Competency‑based academic writing in upper secondary (BGU): from draft to publication
DOI: https://doi.org/10.64747/fdgdj573

Keywords: academic writing, formative assessment, analytic rubrics, upper secondary education, Galápagos

Abstract
The study evaluated the impact of a competency-based academic writing pathway (guided planning, iterative drafts, rubric-based formative assessment, and basic editorial preparation) on writing performance and self-regulation among Upper Secondary (BGU) students in the Galápagos. It used a quasi-experimental design with nonequivalent groups (24 classes; n = 782) and T0–T1 measurements (optional T2), combining open MINEDUC datasets, administrative records (Carmenta), and classroom writing products (versioning and feedback traces). Analyses included difference-in-differences, multilevel linear models (GLMM), multilevel mediation via peer/self-assessment participation, and qualitative triangulation of interviews and comments.

The Treatment×Time interaction was ≈ +5.2 points (0–100 scale), with Hedges' d ≈ 0.46. The largest gains were in evidence/citation and rhetorical organization. Full compliance with editorial conventions (APA/IMRaD) increased (OR ≈ 3.1), and self-regulation improved by 0.26 SD (standardized DiD). Participation in peer/self-assessment partially mediated the effect (standardized indirect effect ≈ 0.12). Effects were consistent across grades and gender, with the strongest benefits in the lowest baseline tercile. Qualitative evidence pointed to rubric-driven alignment of expectations and a sense of authorship linked to school publication.

The draft-to-publication pathway thus yields educationally meaningful, transferable improvements in Upper Secondary writing, especially on higher-order criteria and in the adoption of editorial standards. Institutionalizing scheduled drafting cycles, shared rubrics, and school dissemination channels is recommended. The model integrates process metrics (feedback density/focus, substantial-revision rate) with reproducible multilevel analysis, offering a feasible route for real-world evaluations.
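The core estimand described above, a Treatment×Time difference-in-differences fitted as a multilevel model with class-level random intercepts, can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' code: the variable names, effect sizes, and group counts are assumptions chosen to mimic the design reported in the abstract (24 classes, two time points, a ≈ +5.2-point interaction).

```python
# Hedged sketch of the DiD-as-mixed-model analysis on synthetic data.
# All names (klass, treated, time, score) are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

n_classes, students_per_class = 24, 33
rows = []
for c in range(n_classes):
    treated = int(c < n_classes // 2)      # half the classes treated
    class_effect = rng.normal(0, 3)        # class-level random intercept
    for s in range(students_per_class):
        base = 60 + class_effect + rng.normal(0, 8)  # student baseline
        for t in (0, 1):                   # T0 and T1 measurements
            # Simulate a +5.2-point Treatment×Time effect on top of a
            # common +2-point secular trend.
            score = base + 2.0 * t + 5.2 * treated * t + rng.normal(0, 4)
            rows.append(dict(score=score, treated=treated, time=t, klass=c))

df = pd.DataFrame(rows)

# Multilevel DiD: random intercepts for class; the treated:time
# coefficient is the difference-in-differences estimate.
fit = smf.mixedlm("score ~ treated * time", df, groups=df["klass"]).fit()
did = fit.params["treated:time"]
print(f"DiD estimate: {did:.2f}")  # should land near the simulated 5.2
```

With real data, the same formula extends naturally: student-level random effects for the repeated T0–T1 measurements, covariate adjustment, and cluster-robust checks, in line with the DiD literature the article cites (Callaway & Sant'Anna, 2021; Goodman-Bacon, 2021).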
References
Angrist, J. D., & Pischke, J.-S. (2009). Mostly harmless econometrics: An empiricist’s companion. Princeton University Press. https://doi.org/10.2307/j.ctvcm4j72
Austin, P. C. (2009). Balance diagnostics for comparing the distribution of baseline covariates between treatment groups in propensity-score matched samples. Statistics in Medicine, 28(25), 3083–3107. https://doi.org/10.1002/sim.3697
Austin, P. C. (2011). An introduction to propensity score methods for reducing the effects of confounding in observational studies. Multivariate Behavioral Research, 46(3), 399–424. https://doi.org/10.1080/00273171.2011.568786
Bates, D., Mächler, M., Bolker, B., & Walker, S. (2015). Fitting linear mixed-effects models using lme4. Journal of Statistical Software, 67(1), 1–48. https://doi.org/10.18637/jss.v067.i01
Bates, D., & Walker, S. (2014). Fitting linear mixed-effects models using lme4 (arXiv version). arXiv. https://doi.org/10.48550/arXiv.1406.5823
Benjamini, Y., & Hochberg, Y. (1995). Controlling the false discovery rate: A practical and powerful approach to multiple testing. Journal of the Royal Statistical Society: Series B (Methodological), 57(1), 289–300. https://doi.org/10.1111/j.2517-6161.1995.tb02031.x
Braun, V., & Clarke, V. (2019). Reflecting on reflexive thematic analysis. Qualitative Research in Sport, Exercise and Health, 11(4), 589–597. https://doi.org/10.1080/2159676X.2019.1628806
Callaway, B., & Sant’Anna, P. H. C. (2021). Difference-in-differences with multiple time periods. Journal of Econometrics, 225(2), 200–230. https://doi.org/10.1016/j.jeconom.2020.12.001
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Routledge. https://doi.org/10.4324/9780203771587
de Chaisemartin, C., & D’Haultfœuille, X. (2020). Two-way fixed effects estimators with heterogeneous treatment effects. American Economic Review, 110(9), 2964–2996. https://doi.org/10.1257/aer.20181169
González Ortega, M. de J., & Ramos Rosero, M. Y. (2025). Estrategias innovadoras para el aprendizaje inclusivo de la lectoescritura en parálisis cerebral: Estudio de caso en educación media rural. Horizonte Científico International Journal, 3(2), 1–12. https://doi.org/10.64747/m710ss96
Goodman-Bacon, A. (2021). Difference-in-differences with variation in treatment timing. Journal of Econometrics, 225(2), 254–277. https://doi.org/10.1016/j.jeconom.2021.03.014
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112. https://doi.org/10.3102/003465430298487
Hedges, L. V., & Hedberg, E. C. (2007). Intraclass correlation values for planning group-randomized trials in education. Educational Evaluation and Policy Analysis, 29(1), 60–87. https://doi.org/10.3102/0162373707299706
Hidalgo, M. A. A., Serrano Toala, C. M., Alava Chasi, J. M., & Quilumba Abril, T. M. (2025). Desempeño lector y equidad en la Educación Básica Superior: Evidencia desde la evaluación Ser Estudiante en el Ecuador. Horizonte Científico International Journal, 3(2), 1–20. https://doi.org/10.64747/cq73z358
Karaman, P. (2024). Effects of using rubrics in self-assessment with instructor feedback on pre-service teachers’ academic performance, self-regulated learning and perceptions of self-assessment. European Journal of Psychology of Education. Advance online publication. https://doi.org/10.1007/s10212-024-00867-w
Panadero, E., & Jönsson, A. (2020). A critical review of the arguments against the use of rubrics. Educational Research Review, 30, 100329. https://doi.org/10.1016/j.edurev.2020.100329
Panadero, E., Jönsson, A., & Botella, J. (2017). Effects of self-assessment on self-regulated learning and self-efficacy: Four meta-analyses. Educational Research Review, 22, 74–98. https://doi.org/10.1016/j.edurev.2017.08.004
Panadero, E., Piniero, E., & Fernández-Castilla, B. (2023). Effects of rubrics on academic performance, self-regulated learning, and self-efficacy: A meta-analytic review. Educational Psychology Review, 35(1), 113–152. https://doi.org/10.1007/s10648-023-09823-4
Pischke, J.-S. (2019). Differences-in-differences (Lecture notes). London School of Economics. https://doi.org/10.2139/ssrn.3453819
Preacher, K. J., Zyphur, M. J., & Zhang, Z. (2010). A general multilevel SEM framework for assessing multilevel mediation. Psychological Methods, 15(3), 209–233. https://doi.org/10.1037/a0020141
Rodríguez Ruiz, M. F., & Posligua Garcia, D. M. (2025). Evaluación formativa con rúbricas digitales en Ciencias Naturales: Impacto en aprendizaje por indagación en 7.º–10.º EGB. Horizonte Científico Educativo International Journal, 1(2), 1–18. https://doi.org/10.64747/emgnq4112
Rosenbaum, P. R., & Rubin, D. B. (1983). The central role of the propensity score in observational studies for causal effects. Biometrika, 70(1), 41–55. https://doi.org/10.1093/biomet/70.1.41
License
Copyright (c) 2025 Carmen Virginia Chico Castro, Wuilo Geovanny Valverde Narvaez, Enrique Segundo Maroto Balseca, Marcia Viviana Paucar Taco (Author)

This work is licensed under a Creative Commons Attribution 4.0 International License.
