#BTColumn – CXC: The big black box

by Barbados Today Traffic 12/12/2020

Disclaimer: The views and opinions expressed by this author are their own and do not represent the official position of the Barbados Today.

by Dr Juliet Melville

The much-anticipated report from the Independent Review Team (IRT), appointed by the Chairman of CXC following the regional outcry over the results of the 2020 CXC-administered CAPE and CSEC examinations, failed to address adequately and clearly the many queries raised by students, parents, teachers and other concerned persons. The Report is written in a style which cannot be easily understood by the same groups raising the queries, and considerable effort is needed to make sense of its contents. Nevertheless, if one is able to navigate through the Report, one must feel a great sense of discomfort with the IRT findings. Firstly, the Report does not provide a clear explanation for the disparity between the expected grades and the grades that were awarded in the 2020 CAPE and CSEC examinations. Nor does it adequately explain the disparity between the 2020 results and the performance of students in previous years, or the numerous other inconsistencies highlighted by affected students across the region. Rather than simply attributing the disparity between expected grades and awarded grades to a divergence between the teacher-marked and CXC-marked School-Based Assessments (SBAs), the IRT report points to other possible explanations. In fact, the report raises more questions than it answers and reinforces the case for greater transparency, accountability, and more effective oversight of CXC.
The IRT examined and reported on (i) the modified approach for the administration of the July/August 2020 CSEC and CAPE examinations; (ii) the moderation process applied to the School-Based Assessment (SBA) for the July/August 2020 examinations; and (iii) the grading process used for these same examinations. The related findings of the IRT are discussed below.

The Modified Approach

As a result of the COVID-19 pandemic, CXC used a modified approach for its 2020 examinations. This entailed testing based only on a multiple-choice paper (Paper 1) and the School-Based Assessment (Paper 3), instead of the usual three-paper examination, which also included an essay-type and extended-response or structured-questions paper (Paper 2). According to the IRT, CXC undertook “extensive consultations with regional stakeholders” in deciding on the modified approach. The IRT concurred that this was the “best option under the circumstances”. Nevertheless, a pertinent question is whether the use of Papers 1 and 3 was really adequate to assess students, given well-known deficiencies in the SBA process and in multiple-choice papers that are not properly designed. While the use of SBAs is laudable, the challenges associated with them are “legendary and well documented”. Annually, teachers experience enormous difficulties getting children to complete these in a timely manner and to the required standard. At the start of the shut-down, many schools were lagging in their completion of SBAs. Further, in many instances, SBAs are not designed or conducted in the manner envisaged to yield the intended benefits of the internal assessment of the student over time. Finally, there is no guarantee that the SBAs represent the solo efforts of the student. In the case of multiple-choice assessments, it has been observed that “[O]n too many multiple-choice tests, the questions do nothing more than assess whether students have memorized certain facts and details.
But well-written questions can move students to higher-order thinking, such as application, integration, and evaluation”[1]. Over the years, and particularly in 2020, students have commented on the high number of repeat questions – a disservice to students. A well-designed multiple-choice examination can capture fairly accurately the student’s knowledge, understanding and competencies. The question is whether CXC’s multiple-choice assessments meet this standard. Given the recognised shortcomings, both Paper 1 and Paper 3 by themselves appear inadequate to truly assess students. Perhaps that is why the IRT reached the qualified conclusion that the modified approach was the “best option under the circumstances”. But was it the best? Many school administrators initially expressed grave reservations about the use of the proposed modified approach. All things considered, it is difficult to agree that the modified approach was the best option for the assessment of students.

Moderation of SBAs

We also learnt from the IRT Report that CXC adopted a new approach for the moderation of the 2020 SBAs, and the overall lowering of SBA scores in 2020 is attributed to this. The IRT concluded that the “moderation of all Paper 3 from all schools increased thoroughness and reliability of the 2020 process compared to previous years.” Let us explore this claim. Prior to 2020, all SBAs were assessed by teachers, and these marks – together with an Estimated Grade and an Order of Merit based on the student’s overall performance over the period of preparation for the examination – were shared with CXC via its Online Registration System. A sample consisting of the work of five students from each centre/school was then selected by CXC for assessment (moderation) to ensure consistency with CXC’s standards (referred to as Centre Moderation). Only the sampled work was required to be submitted to CXC, but CXC could request additional SBAs for assessment if it was deemed necessary.
Schools were not required to submit samples for some subjects identified by CXC; instead, one third of the centres/schools offering these subjects were randomly sampled for moderation each year (On-Site Moderation). For most subjects, moderation involved the re-marking of the sampled SBAs by CXC-approved examiners. The Report is silent on how divergence between the teachers’ marks and CXC-moderated marks was previously dealt with. Candidates are reported as “ungraded: no SBA received” if the scores are not submitted, or if requested samples of work are not submitted. For 2020, “moderation across all centres/schools and all subjects” was adopted. Schools were required to upload and submit the students’ marks as well as all SBAs (not just the marks and the five sampled SBAs as in past years) for all subjects. Those subjects that were moderated on-site continued to be handled in this way. CXC also reportedly selected a larger number of SBAs for moderation. This comprised three sets (samples) of five SBAs from each centre/school, giving a total of 15 SBAs for moderation/remarking instead of the usual single sample of five SBAs from each centre/school. These three samples were remarked sequentially to ascertain consistency between the teachers’ scores and the CXC-marked SBAs. According to the IRT, if none of the three samples was found to be in line with the CXC examiner’s marks, all the SBAs from that school/centre were remarked and the CXC marks replaced the teachers’ scores. The difference between moderators’ and teachers’ scores in 2020 was higher for CAPE subjects (66 per cent) than for CSEC subjects (36 per cent). Based on the process described in the IRT Report, there does seem to have been an attempt to increase the rigour of moderating the SBAs, including providing better oversight of the CXC examiners.
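The 2020 decision rule described above – check up to three samples of five SBAs in sequence, and remark the entire centre only if none agrees with the CXC examiner – can be sketched as follows. Note that the IRT report does not state what counted as marks being “in line”, so the mean-absolute-difference criterion and the `tolerance` value here are purely illustrative assumptions:

```python
def moderate_centre(teacher_scores, cxc_scores, tolerance=2.0):
    """Sketch of the 2020 moderation rule as described in the IRT report.

    teacher_scores / cxc_scores: dicts mapping candidate id -> SBA mark.
    Up to three samples of five candidates are checked in sequence. If any
    sample's teacher marks are "in line" with the CXC examiner's marks
    (here assumed to mean a mean absolute difference within `tolerance`;
    the report does not give the actual criterion), the teacher marks
    stand. Otherwise all SBAs are remarked and the CXC marks replace
    the teacher's scores.
    """
    candidates = list(teacher_scores)
    # Three sequential samples of five candidates each.
    samples = [candidates[i:i + 5] for i in range(0, 15, 5)]
    for sample in samples:
        if not sample:  # fewer than 15 candidates at this centre
            continue
        diffs = [abs(teacher_scores[c] - cxc_scores[c]) for c in sample]
        if sum(diffs) / len(diffs) <= tolerance:
            return teacher_scores       # a sample agreed: teacher marks accepted
    return dict(cxc_scores)             # full remark: CXC marks replace teacher's
```

Even in sketch form, the rule makes visible the administrative asymmetry the article notes: a single agreeing sample preserves the teacher’s marks, but disagreement triggers a full remark of every SBA at the centre.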
The moderation process was not fundamentally changed, except insofar as there was no longer general acceptance of teachers’ awarded scores when these differed from those of the moderators. The new process would have increased significantly the demands on CXC to process all uploaded SBAs, to moderate the larger samples and to undertake the 100 per cent remarking of SBAs from centres/schools as necessary. The deluge of SBAs from all centres for all subjects must have presented an administrative and logistical challenge for CXC. The IRT reported that CXC’s Chief Examiners attributed the divergence in marks to teacher-related factors. This is one aspect of the explanation and merits further investigation, but the quality of the moderation also merits scrutiny. The report failed to shed any light on who exactly moderated the SBAs, whether it was the usual team of examiners used in the past, and whether these same persons were responsible for the remarking of all the SBAs from a particular centre/school when this was considered necessary. The report refers to CXC-trained markers and also suggests that training of additional SBA moderators and retraining of current SBA markers had to be undertaken. It is well known that since CXC moved away from the annual collegial on-site marking of examination scripts, which brought together educators from around the region, many of these veteran examiners no longer participate in the exercise. Hence the calibre of the persons used in the moderation exercise matters – their level of experience and their familiarity with teaching and examining the respective subjects. The checks and balances to ensure consistency and standardisation in marking across examiners, within and across countries, in line with agreed standards, are also pivotal to the integrity of the moderation process. The IRT did not use the opportunity to probe deeper into the quality of the moderation and remarking exercise for 2020.
Grading

The IRT report penetrates somewhat the dense darkness that obscures CXC’s grading process. For 2020, CXC awarded CSEC and CAPE grades based only on the multiple-choice paper and the SBA instead of the usual three papers. On the basis of “a qualitative analysis”, CXC concluded that Papers 1 and 3 were adequate to evaluate students’ knowledge, skills and competencies in 26 of the 33 CSEC subjects, and that alternative papers would have to be used to cover the seven subjects whose profiles were not adequately covered. It is unclear whether a similar exercise was undertaken for CAPE subjects. The concerns with using Papers 1 and 3 alone were discussed above. CXC has insisted that the weight of the various papers in the determination of the final grade remained unchanged. This requires some explanation. Assume marks are awarded for Papers 1, 2 and 3 as follows: 60, 100 and 40, for a total of 200 marks for a subject. This yields weights of 30 per cent, 50 per cent and 20 per cent for the respective papers. The omission of Paper 2 automatically means that the subject is now scored out of 100, with Paper 1 (60 marks) and Paper 3 (40 marks) now accounting for 60 per cent and 40 per cent of the final score, respectively. While the marks carried by Papers 1 and 3 remain unchanged, their relative contribution to the final grade changed, and therefore their weight in the determination of the final grade must change. Only the ratio between the contributions of Paper 1 and Paper 3 (3:2) to the total marks, and the weights assigned to the profiles covered in a given paper, remained unchanged. A more pertinent concern is the manner in which the ranges (referred to as the “cut points”) for the various grades for both CSEC and CAPE were determined under the modified approach. According to the report, these were based on the median score for the subject.
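The weight arithmetic above can be verified in a few lines. The 60/100/40 mark allocation is the article’s illustrative example, not the allocation for any actual CXC subject:

```python
# Illustrative mark allocation: Paper 1 = 60, Paper 2 = 100, Paper 3 = 40.
papers = {"Paper 1": 60, "Paper 2": 100, "Paper 3": 40}

total = sum(papers.values())                       # 200 marks
weights = {p: m / total for p, m in papers.items()}
# Paper 1: 30%, Paper 2: 50%, Paper 3: 20% of the final grade.

# Drop Paper 2 under the modified approach:
modified = {p: m for p, m in papers.items() if p != "Paper 2"}
mod_total = sum(modified.values())                 # 100 marks
mod_weights = {p: m / mod_total for p, m in modified.items()}
# Paper 1: 60%, Paper 3: 40% of the final grade.
```

The marks carried by each remaining paper (60 and 40) and the 3:2 ratio between them are unchanged, but each paper’s weight in the final grade has doubled – which is the point the article argues CXC’s “weights remained unchanged” claim obscures.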
The IRT’s observation that “Information on how the ‘cut points’ were established in previous years was not readily available…” is especially disturbing. It is inconceivable that information on how the “cut points” were determined in prior years is not available. And if information on prior years was not readily available, what is the basis for comparing performance across years? How did the use of the median affect the range associated with particular grades, and did this affect the distribution of grades? The report is also fuzzy on how the profile grades attained in a subject are translated into the subject grade awarded. In fact, the approach described suggests a significant degree of subjectivity in this process, and the logical implication is that the subject grade can vary from year to year depending on how CXC evaluates and interprets the profile grades attained by students. Given that all the possible profile grade combinations for a subject are known for both CSEC (A to E) and CAPE (A to F), it is unclear why these are not linked objectively to each of the possible subject grades. One must concur with the IRT’s conclusion that CXC should strive to reduce the degree of subjectivity in the determination of subject grades from profile grades, and also make the grading process easier to understand. It would be fair to say that the IRT reached a qualified conclusion on CXC’s grading model, describing it as adequate for the circumstances but highlighting a number of issues which could have impacted the outcome for individual students.
Based on the IRT findings, the grading model used by CXC in 2020 was clearly problematic: it skewed the distribution of grades, reducing the numbers of persons attaining Grades I–IV; the overall scores attained in some subjects were found to be weakly correlated with the profile scores, i.e., the grade awarded was inconsistent with the profiles attained in a subject; it used the median value to determine cut points for the various grades, when it is unclear how these ranges were determined in previous years; and there was significant variance between the teachers’ and the moderators’ scores. In light of the issues identified with the 2020 grading of the CXC examinations, the IRT found that “there was justification for a review of the examination papers, including re-marking.”

Conclusion

The IRT report provided much food for thought, and looking behind its nuanced language one must conclude that CXC played a not insignificant role in the 2020 debacle. While the IRT did not conduct a critical evaluation of CXC’s examination and oversight processes, a careful reading of the report raises serious concerns about the internal processes of the institution and the level of effective independent oversight of its operations. Even though CXC is an important player in the education landscape of the region, for most people, including educators charged with preparing students for its examinations, CXC remains a “Big Black Box”. The IRT report allowed a sliver of light to penetrate this darkness. Going forward, in order to redeem the credibility of CXC and the integrity of its examination process, there needs to be a critical review of its approach to examining and certifying. The statistical methods and technical analyses used to inform its decisions also need thorough interrogation.

Dr Juliet Melville is a concerned parent and regionalist.