The project manager starts the quality assessment project
File comparison
Mistake classification
Discussion
Final evaluation
Statistics
The system manages the whole evaluation process
All assessment data is stored in the database
The editor doesn't have enough time to give feedback to the translator
The agency doesn't have established quality assessment standards
Translators do not learn from their mistakes
Disputes between translators and reviewers are not constructive
Managers spend a lot of time on quality control
There is no objective rating of a translator's work quality
There are no detailed quality statistics for each translator
The agency's management is uninformed regarding internal problems
To resolve all these issues, agencies develop special processes and hire and train specialized employees. But this inevitably increases costs and creates excessive bureaucracy.
The system does most of the work for you:
The project manager assigns the translator and editor to start the evaluation process
The editor uploads bilingual files, and the system compares them
The editor classifies the mistakes for each correction, and the system counts the points
The translator receives an evaluation notification and can review the corrections and mistakes
The translator can add notes for the editor and return the project
If the translator and editor are unable to come to an agreement, the final score is determined by an arbiter
All corrections, scores, and comments are stored in the system and remain available at any time
Based on the assessments, the system generates reports for translators, editors, etc.
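The classification-and-scoring steps above can be sketched in a few lines of code. This is a minimal illustration only: the error categories, severity weights, per-1000-word normalization, and pass threshold here are hypothetical assumptions for the example, not the actual scheme used by the system.

```python
# Hypothetical weighted error scoring for a translation quality evaluation.
# Severity weights and the quality threshold below are illustrative
# assumptions, not the system's actual configuration.

SEVERITY_WEIGHTS = {"minor": 1, "major": 3, "critical": 10}

def score_evaluation(mistakes, word_count, threshold=10.0):
    """Sum penalty points and normalize per 1000 source words.

    mistakes: list of (category, severity) pairs, e.g. ("terminology", "major")
    Returns (total_points, points_per_1000_words, passed).
    """
    total = sum(SEVERITY_WEIGHTS[severity] for _category, severity in mistakes)
    per_1000 = total * 1000 / word_count
    return total, per_1000, per_1000 <= threshold

# Example: one major and two minor mistakes in a 2,500-word translation.
mistakes = [("terminology", "major"), ("spelling", "minor"), ("omission", "minor")]
total, per_1000, passed = score_evaluation(mistakes, word_count=2500)
# total = 5 penalty points, per_1000 = 2.0, passed = True
```

In practice such weights and thresholds would be configured per agency or per standard; the point is only that the editor classifies each correction and the arithmetic is done automatically.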
See a detailed description of the process in our video tutorials and documentation