#IAMUTOPIAN: Thoughts and feedback on the Translation category from an LM

in #utopian-io


Hello everyone and welcome to my first #iamutopian post.

I am a Translator & Language Manager for the Italian team within the @utopian-io and @davinci.polyglot partnership. The Translation category has been active since last June and seems to have taken Utopian by storm, being one of the forces (if not the direct cause) behind the recent developments in mana distribution. Because of the effects these recent changes had on the category, plus some underlying issues that needed to be addressed anyway, these last couple of weeks have seen a concrete rise in discussions and proposals within the community, first on the Discord server and later through SteemIt posts.

The Greek Team recently published a really good breakdown of the many issues that have been raised against the current system through their common profile, @aristotle.team. As is sometimes the case with me, I began writing a comment over there, which became a rather detailed comment, and finally an overlong comment.
Then @elear, the man behind Utopian, published his own post asking for comments and suggestions, and I realized my contribution was becoming too big for a simple comment. Besides, LMs have often been encouraged in the past to contribute #iamutopian posts. And so, here I am.

I’ll try to condense my general thoughts on the most prominent issues regarding the Translation category in this post.


THE QUESTIONNAIRE


(Image source: Pixabay)


I was part of the @davinci.polyglot translation project before the @utopian-io partnership began and I’ve been a part of the Italian translation team since the beginning. I first applied as a translator in June but was later asked by my LM, @mcassani, to join him as Language Manager, as our team was getting bigger. I agreed and began reviewing in August. That’s when I first encountered the Utopian Questionnaire and began using it to score my Translators’ posts.

At first, everything seemed to run quite smoothly. I wasn’t crazy about the questionnaire, especially since the actual scoring options were either quite limited or too broad, but it seemed to work well enough. The problems began when we grew too much (maybe too soon) and started clogging Utopian’s mana. Some posts lost the chance to be rewarded, and people started looking more closely at how everyone else was doing, discovering that some teams were too strict and others too lenient in scoring, causing imbalances in payouts.
At the moment, I’m not particularly concerned about figuring out whether the more generous LMs were scoring higher in good faith or with malicious intent, even though this might also be a subject worthy of its own investigation. What concerns me is the fact that this kind of imbalance was made possible by a less-than-perfect questionnaire, and giving LMs better and more efficient tools to work with should be our number one priority right now.

The questionnaire definitely needs to be modified for the better as soon as possible. I’ll do a rundown of the current questions, with my commentary on how I think they could be improved.
I began writing this before the proposal for a new questionnaire started circulating, so I’ll just add feedback on that at the end of each section.

How would you describe the formatting, language and overall presentation of the post?

I’m a strong believer in a fixed Average vote for this. Not only have I not personally seen any particularly outstanding translation posts (I’ll admit, I haven’t browsed around much), but I highly doubt there’s really any way to make the posts we need to write particularly interesting and personal. There’s a fixed set of information we need to put in the post and that’s about it.
I had the chance to personalize a few translation posts I did for Consul with more than generic information, but that was because that specific project gave me the chance to reflect more upon certain terms and context (you can see them here and here). Other projects I’ve reviewed don’t give the same opportunity. Node.js has a lot of interesting technical terminology which translators could maybe expound upon, but Google and Wikipedia already explain most terms much better than we could.
I’ve also recently happened to see one of @dimitrisp’s Node.js posts which I agree was above average in presentation. He wins on character and spunk, though, because he presents his work in a very personal way and spends more than a few minutes looking for funny GIFs to give his post some more color, not because there’s really anything more that could be said about the project and/or translation.

On the other hand, I’ve seen very average posts written in less-than-average English, at least in those few sentences that weren’t simple quotations from file descriptions. That is definitely not a good calling card for translators, since those sentences are all that anyone unable to judge the actual translation will see of their work. Since we’re all translating from English, it could be argued that proficiency in written English is not a fundamental requirement, yet part of our “job” is to present the work we’ve done in order to receive a reward for it, and when we post translation reports we represent, in a way, @utopian-io and @davinci.polyglot. Therefore our posts are relevant, even though they are not the core of what we’re being rewarded for.

With all this in mind, what I propose is to split this question into two different ones, with very limited answers and not a huge impact on the final score. Above average posts and/or posts that are well written will get a marginally higher score than average posts and/or posts that are written in poor English.

Does this post present the work done in a personal, engaging, or anyway outstanding format? Yes / No

How would you rate the grammar, syntax, and overall linguistic level of this post? Good / Poor

With regard to the new questionnaire proposal, I believe that breaking this single sub-section into five different questions is simply ridiculous. Yes, as I’ve acknowledged myself, a good post is important. But it’s also not what our “job” as translators is about. In my opinion, two questions with straight Yes / No or Good / Poor answers are more than enough to score this part.
I don’t really understand the plagiarism and citation aspects very well. If those are two recurring issues, we might just ask translators to pay more attention and simply not score their contribution unless they solve them pronto (i.e. within the three-day review window). If we worry about plagiarism, knocking just a few points off the final score and still rewarding that post with a $30+ upvote is a little absurd.

How would you rate the overall value of this contribution on the open source community?

I think we can all agree that this question has proven to be useless. We’ve all been repeatedly encouraged to recognize only some value to the open source community for any and all translations, so this question can simply be removed.

The new questionnaire removed it, as well.

What is the total volume of the translated text in this contribution?

Translators have often been encouraged to produce posts covering at least 1000 words. This was one of the many steps taken over the past weeks to avoid a complete takeover of the Utopian voting system by the Translation category: the questionnaire works in such a way that a higher volume of words produces a higher score, yet posts with the minimum amount of words would still cash in a rather good payout (and consume VP, or mana). This could encourage translators to produce multiple small contributions rather than a bigger one.
If we now assume this as a standard rule, the option to score under 1000 words would be needed only for the last contribution to each project, in case there are fewer than 1000 words left. On the other hand, rewarding bigger contributions with a higher score might be nice, but I see a very basic problem with adding too many options. We’ve already been stretching Utopian’s mana rather thin: I don’t think we can afford to reward much more than what’s currently being given out, so there might not be enough mana to pay for posts that go much beyond 1000 words.
If this is true, I see only two possible solutions: either we limit the available options to three or four, or we make it a rule that contributions must be at least 1000 words and simply ask a Yes / No question about whether a post reaches that threshold. We’d still be rewarding a translation based on the number of translated words; we’d simply be imposing a fixed schedule on when to post (roughly every 1000 words, or possibly even higher).

So the question would be either

What was the volume of the translation outlined in this post (excluding duplicate strings and non-translatable words)? Less than 1000 words / Between 1000 and 2000 words / More than 2000 words

or, if feasible: Less than 1000 words / Between 1000 and 2000 words / Between 2000 and 3000 words / More than 3000 words

or

Was the volume of the translation outlined in this post higher than 1000 words (excluding duplicate strings and non translatable words)? Yes / No
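
To make the two alternatives concrete, here is a minimal sketch of how each could map onto points. The point values are hypothetical placeholders of my own, not actual questionnaire weights; only the 1000-word threshold comes from the discussion above.

```python
# Minimal sketch of the two volume-scoring alternatives discussed above.
# The point values are hypothetical; only the 1000-word threshold is real.

def volume_score_tiered(word_count: int) -> int:
    """Option 1: a small, fixed set of tiers."""
    if word_count < 1000:
        return 0
    if word_count < 2000:
        return 5
    return 10

def volume_score_threshold(word_count: int) -> int:
    """Option 2: a single Yes / No check against the 1000-word rule."""
    return 5 if word_count >= 1000 else 0

print(volume_score_tiered(1450))     # -> 5
print(volume_score_threshold(1450))  # -> 5
```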

I see that the new questionnaire added steps rather than subtracting them. As I said, it would seem that reducing the options, and consequently keeping all contributions roughly the same size, would help with keeping our mana in check. But if that’s not the case, I have no objection to adding more scalability in scoring rather than less. This is something I’d leave to the SteemIt “professionals” to ascertain.

How would you rate the overall difficulty of the translated text in this contribution?

This seems to have been the highest point of contention lately.
Personally, I believe objective difficulty ratings should be attributed directly to individual projects and the questionnaire should simply have a dropdown list of all the available projects. A second question could be added in order to allow the reviewer to adjust the score, in case the specific batch of strings translated in a given post were easier, harder or on par with the rest of the project as a whole.

I’ve been known to repeatedly say that, so far, no project that we’ve been translating has been hard. At all. Some projects were easy, some were very easy, and some were of average difficulty, with only some parts that were a little more difficult. But even the average ones were comprised of short, simple enough strings, and the only real difficulty was figuring out whether the more technical terms had a translation in the target language or were best left in English. I’ve personally always found answers to these questions through fairly shallow Google or Wikipedia searches; none of my Translators is a programmer, and yet they’ve all managed well enough too. After that, the only real challenge is consistency, which shouldn’t count as a major ordeal for translators, especially since Crowdin has a built-in search engine that allows translators to quickly work out how a certain word was translated before (and adjust for consistency when needed).
Some LMs have also been saying that repeated strings or code should count towards difficulty or, at least, word volume, since it takes time to punch them in anyway. I find this a little ridiculous since, again, Crowdin has a simple enough tool that allows a translator to copy the original string into the translation box and edit only the translatable words, ensuring that the code stays the same with little to no effort. Also, translations are automatically saved and come up as clickable options when the string to be translated is close enough to a previously translated one. Again, one click and you’re done.

Of course, my personal experience is limited to the Italian language. I find it difficult to believe that other major languages might have trouble finding technical terms documented in their own language through Google, but I accept that this might not be the case for minor languages, which might (and I stress might) not have a lot of technical literature at their disposal. I guess this is where trust in the individual LMs comes in, unless Utopian brings in external auditors for a professional opinion, as @elear suggested they might do.

What project was this post about? List of all available projects

Was the translation outlined in this post significantly more difficult than the rest of the project? Yes / No
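
To illustrate the two-question scheme, here is a minimal sketch. The project names, difficulty ratings, and the 10% adjustment are all hypothetical placeholders; only the idea of attaching a fixed rating to each project comes from the proposal above.

```python
# Sketch of per-project difficulty with a reviewer-controlled adjustment.
# Ratings and the 10% bump are invented for illustration only.

PROJECT_DIFFICULTY = {
    "Node.js": 1.2,  # average difficulty, some technical terminology
    "Consul": 1.0,   # easy
}

def difficulty_score(project: str, harder_than_usual: bool) -> float:
    """Fixed per-project rating, nudged by the reviewer's Yes / No answer."""
    base = PROJECT_DIFFICULTY[project]
    return round(base * 1.1, 2) if harder_than_usual else base

print(difficulty_score("Node.js", harder_than_usual=False))  # -> 1.2
print(difficulty_score("Consul", harder_than_usual=True))    # -> 1.1
```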

This question seems to have been taken out of the new questionnaire. Instead, a question about project consistency has been added. Considering the current LM payout, I’m not sure this should be a fixed concern for LMs with regard to every single post, especially when multiple translators and/or LMs are working on the same project. But I agree that it’s an important factor to consider for the project as a whole.

How would you rate the semantic accuracy of the translated text?

This is the only question that more or less worked, but having read @rosatravels’ in-depth analysis on how to best evaluate translation work, I realize we can do so much better. Counting mistakes and differentiating between major and minor mistakes seems like a good middle ground solution, at least to start off.
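
As a concrete example, here is a minimal sketch of such a mistake-count rubric. The starting score and the penalty weights are hypothetical placeholders; the major/minor distinction is the only part taken from the discussion above.

```python
# Sketch of a mistake-count rubric: start from a maximum accuracy score and
# deduct more for major (meaning-changing) errors than for minor ones.
# All weights and the starting score are hypothetical.

def accuracy_score(major_mistakes: int, minor_mistakes: int,
                   max_score: int = 40) -> int:
    """Deduct 5 points per major mistake and 1 per minor one, floor at 0."""
    penalty = 5 * major_mistakes + 1 * minor_mistakes
    return max(max_score - penalty, 0)

print(accuracy_score(major_mistakes=1, minor_mistakes=3))  # -> 32
```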

This is what the new questionnaire proposes, and at the moment I like this option. I’m waiting to read more critiques of it, though, as there might be some drawback I haven’t yet considered.
I don’t understand, however, why we would have both the semantic accuracy question and the mistake-count ones.

All in all, I don’t think the current questionnaire comes out of this analysis very well. On the other hand, I can’t say I like the new proposal very much. Besides everything I’ve already said above, I don’t understand why perfect answers (like zero mistakes or a great post) would still subtract points from the maximum score.
But as I understand it, this was a very rough draft that was shared with the exact purpose of gathering criticism, so I guess the rationale behind each choice will get a chance to be properly explained and dissected going forward. Also, my suggestions are extremely basic. Of course, I’m open to more creative and/or logical solutions.


COMMUNICATION


(Image source: Pixabay)


I definitely agree with @dimitrisp’s comment on this. We need better communication, especially with regard to who’s who on the DaVinci and Utopian sides and who’s responsible for what, so that any hurdle or complaint may find, if not an immediate solution, at least the right ear.

I’m glad @elear’s already proposed weekly voice meetings on the Utopian channel and I hope I’ll be able to participate in the upcoming one.


LM REWARDS AND TRANSLATION PRIVILEGES



Thanks to @pab.ink for his graphic contribution to the Translation category


Before translators started missing their payouts, the main topic of discussion, particularly among LMs, was our payout and the fact that only LMs with a spare LM on their team could translate. Now it seems all but forgotten, but I believe the problem remains, and it’s bound to come up again once everything else has settled.

LMs receive an upvote of about $8 per review. Is that enough, too low, or too much? Consider that translators receive an upvote that ranges between $30 and $50. Of course, they do the actual translation, so it’s reasonable that they receive more. That’s not the only difference, though.
Translators are free to manage their time as they wish: they shouldn’t disappear, of course, but they can pace themselves, publish when they are ready, and receive their payouts accordingly. Lately, a cap was put on the maximum number of posts allowed per week. This means that a good translator with enough free time might receive up to roughly $150 per week for his or her work over three posts. LMs, on the other hand, have an average of three translators each and, if those all publish three posts per week, might end up receiving upvotes worth up to roughly $72 for reviewing nine translations. But this is entirely dependent on their translators’ choices of if and when to publish, without the luxury of pacing themselves, because reviews have to be submitted within three days of a post’s publication.
In the meantime, of course, they need to be available for their translators, the staff, and possibly even other LMs, and their workload might even increase with time. If, for example, we implement a new questionnaire that actually scores posts for being well written and well presented, LMs should also be able to give appropriate feedback on grammar, syntax, and formatting to their translators.
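
For clarity, here is a back-of-the-envelope sketch of the weekly gap, using the rough figures above (the upper end of the $30-50 range, roughly $8 per review, the three-post cap, and an average of three translators per LM):

```python
# Rough weekly payout comparison using the figures quoted above.

TRANSLATION_UPVOTE = 50  # upper end of the $30-50 range
REVIEW_UPVOTE = 8        # approximate LM review upvote
POSTS_PER_WEEK = 3       # current weekly cap per translator
TRANSLATORS_PER_LM = 3   # average team size per LM

translator_weekly = TRANSLATION_UPVOTE * POSTS_PER_WEEK          # ~$150
lm_weekly = REVIEW_UPVOTE * POSTS_PER_WEEK * TRANSLATORS_PER_LM  # ~$72

print(f"Translator: ~${translator_weekly}/week, LM: ~${lm_weekly}/week")
```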

Of course, LMs are supposed to be the best among their team, so it could be argued that their LM rewards are just icing on the cake of what they might make as translators themselves. But this is only true for teams with more than one LM, because single LMs are not allowed to translate without someone else double-checking their work. And even when teams have two LMs, they are not allowed to translate and publish unless there’s no other post pending review.
This makes sense, don’t get me wrong. None of us is an actual professional and all of us might make mistakes: a second pair of eyes checking back on one’s work never hurts. Also, because the whole upvote mechanism needs time to work properly, it’s important that posts are reviewed as soon as possible, which means that reviews logically take priority over translations.
It’s a Catch-22, though. Currently, those who should be the most qualified among us are at best extremely limited in their translation privileges and, at worst, prevented from translating altogether, earning far less than all other members of their teams. That makes very little sense.

As I mentioned at the beginning of this post, we’ve been encouraged to write #iamutopian posts to increase our rewards. But is that really a solution? After all, we signed up to do translation work, not blog posts. Of course, we all have a SteemIt profile, so it can be assumed that we all blog to some extent. Yet you need material to blog, and reviewing doesn’t really offer much more inspiration than translating: if we accept that translators don’t have a lot of meat to put on their report table, the same is true for LMs. The vast majority of the work we do is routine, especially on long projects such as Node, where each file is basically the same. I do wish to be rewarded for my work, but not by having Utopian upvote posts that basically say “so, this week I did exactly the same thing as last week, now pay me”.

I don’t really have a quick solution to this, other than maybe suggesting that LMs could be given fixed rewards… a solution which might have its own set of drawbacks. I’ve mentioned all of this because I don’t wish for this particular issue to disappear behind the bigger problems we’re working on right now.


Thanks to @pab.ink for his graphic contribution to the Translation category


That’s all I could come up with so far. Thank you for reading, if you’ve made it this far. I’ll keep reading all other observations and feedback with much interest.

COMMENTS

Wow! There is a lot to say about this post. I appreciate the effort you put into writing it and your concern for the growth of the Translations category. Kudos!

A lot has been said about improving the scoring system, and the CEO has taken steps to revise the questionnaire so it incorporates more relevant and appropriate questions.

That said, it would be better to have more options for LMs to choose from when scoring contributions. ''Yes/No'', as you proposed, could in some sections lead to rather harsh scoring decisions. (IMO)
About the rewards, I think there will be an improvement in the future. Steem's current value is very low, which also affects the SP.
Once again, I appreciate your effort. Keep up the good work.

Please note that while the CM hasn't changed the footer, I am not scoring #iamutopian posts based on the questionnaire. They have their own metric, and that will be the case until we go live with the new guidelines and new questionnaire, which will be comprehensive enough to reflect these types of posts.

Your contribution has been evaluated according to Utopian policies and guidelines, as well as a predefined set of questions pertaining to the category.

To view those questions and the relevant answers related to your post, click here.


Need help? Write a ticket on https://support.utopian.io/.
Chat with us on Discord.
[utopian-moderator]



Thank you for your detailed post about the ongoing discussion. I think there will never be a perfect solution for the questionnaire, as it depends mostly on personal taste. I've already given my opinion on the questionnaires several times, so I won't add my two cents here again. However, I like your approach.

A very challenging question is which solution might be best for the amount of rewards for the LMs. To be honest, I haven't worked as an LM so far, but I can imagine very well what the situation looks like. I often wondered why LMs receive such a low reward when, in the end, they carry a lot of responsibility and need the knowledge to identify even small errors in a translated text. I think the reward for LMs should depend on the number of words reviewed. Another thing I thought about is making the reward dependent on how many mistakes they corrected, because ultimately that's the real work LMs do. Going a little deeper, though, this could potentially cause some LMs to start seeing errors where there are none, just to push their reward. So this wouldn't be a good solution either.

I think the roots of this conflict lie in the number of translators. Utopian and DaVinci have often said that their main goal is to grow, but they have overlooked that we can only grow our teams if the reward pool grows. I think we should limit team sizes, kick out people who repeatedly contribute really bad translations, and recruit only when there is a free spot in a team, not on a regular basis. This would ultimately lead to high quality, which should be in everyone's favor. Thank you for reading.
