OPINION: A response to CXC’s AI Policy clarification

Photo credit: Magnific.com

CXC’s recent clarification regarding artificial intelligence and school-based assessments (SBAs) has provided some reassurance to students, teachers and parents across the region.

Dr Nicole Manning’s explanation that AI originality reports are not intended to function as the sole determinant of academic misconduct was timely, particularly amid growing anxiety surrounding fairness, similarity scores and the role of AI detection software in education.

AI detection systems cannot determine authorship with certainty. They operate on probability, statistical similarity and predictive language patterns. A student can produce fully original work and still be flagged. That limitation is widely recognised in educational technology research and is precisely why human oversight should remain essential within CXC’s framework.

If AI originality reports are not decisive, then what exactly is their operational role within the SBA framework?

This question is urgent, particularly given emerging reports from students and educators who believe they have been wrongly flagged or penalised based on AI similarity scores.

If a system influences outcomes, even indirectly, then its role must be clearly defined, consistently applied and transparently understood.

The SBA model was already grounded in human supervision long before artificial intelligence entered the picture. Teachers guide students through development, monitor their progress, assess their submissions and participate in moderation processes designed to safeguard fairness. Human oversight has always been the foundation of the system. Frankly, the AI policy has simply added another layer to that foundation.

If originality reports are intended primarily for deterrence, documentation, transparency or early identification of potential misuse, then their inclusion is understandable. No examining body should ignore generative AI or assume it will not be used improperly. Academic integrity is fundamental to the credibility of qualifications.

The difficulty arises when these tools are acknowledged to be imperfect while still being embedded in processes that may influence judgement.

If the technology is not definitive, why are numerical indicators still being operationalised in high-stakes assessment contexts?

Once human interpretation becomes the final safeguard, responsibility shifts more heavily onto the education system itself. Teachers may now be expected to interpret originality reports, review writing development over time, compare drafts, assess contextual evidence and determine whether flagged work reflects misconduct or statistical similarity. This is occurring within an environment where teachers are already managing large workloads, administrative duties, classroom demands and SBA supervision responsibilities.

In effect, AI policy has expanded the interpretive burden of existing systems.

This is part of a broader pattern in education where new expectations are introduced without equivalent adjustments in support structures. Each additional layer increases responsibility at the classroom level, yet conversations about workload rarely extend into meaningful discussions about compensation, resourcing or system capacity.

Teachers are not employed by CXC. They facilitate a regional assessment system while fulfilling their core professional roles within schools. If the integrity of the system now depends more heavily on interpretive labour, then workload sustainability and remuneration cannot remain peripheral issues.

They form part of the structural reality of implementation.

To what extent can fairness be maintained consistently across schools and territories with differing levels of capacity and resources?

Schools operate under different conditions. Some have stronger technological infrastructure and more time for detailed review. Others are working under significant constraints.

Teachers also vary in available time and institutional support, which affects how deeply they can interrogate flagged submissions. This is now a question of capacity.

Fairness cannot be fully realised if interpretation differs significantly based on institutional conditions. A policy that depends heavily on human judgement must also account for uneven distribution of time, resources and support across the region.

Should AI tools be standardised across the system?

Another issue arises from the use of multiple AI detection tools. CXC allows different originality checkers, yet these systems are known to produce different results for the same text. If one tool reports 12 per cent similarity and another reports 28 per cent for the same SBA submission, the question becomes unavoidable: which result carries authority? Without standardisation, consistency becomes difficult to guarantee.

If AI detection is to remain part of the assessment framework, then there is a strong argument for institutional coordination rather than fragmented tool usage. Standardisation would also need to be supported at the level of ministries of education and CXC, ensuring that access to tools is not dependent on school resources or individual capacity. Otherwise, implementation risks becoming uneven across institutions.

Could this widen educational inequality?

Access to technology, stable internet, digital literacy and institutional resources is not uniform across the region. Schools with stronger infrastructure are naturally better positioned to manage AI-related requirements. Under-resourced schools may face greater difficulty implementing the same expectations consistently. Technology does not operate neutrally within unequal systems. It interacts with existing disparities and can reinforce them if safeguards are not in place.

Could AI concerns change how students write?

If students begin to associate polished writing with suspicion, there is a risk that they will adjust their academic expression: simplifying language, avoiding complex structures or second-guessing formal tone. That would be an unintended consequence of a system designed to protect integrity, shifting the focus from demonstrating understanding to managing appearance.

Are we responding to AI as a problem of misconduct, or as a fundamental shift in how assessment itself needs to be understood?

At a deeper level, the conversation has really shifted from AI detection to whether the current assessment models remain sufficient in the age of artificial intelligence. Written assessment has long been used as evidence of independent thought. Generative AI complicates this by blurring boundaries between assistance, collaboration and authorship.

For years, educational research has highlighted alternatives such as oral defence, supervised drafting, practical demonstration and real-time evaluation of understanding. These approaches existed long before AI but are now gaining renewed relevance.

The question is whether assessment systems in the Caribbean are evolving quickly enough to reflect this shift.

If AI detection remains central despite acknowledged limitations, then assessment risks relying on tools that are not fully reliable. If human interpretation becomes the primary safeguard, then fairness depends increasingly on institutional capacity and teacher workload. Neither pathway is straightforward.

Balance remains the central challenge for the system. Academic integrity should be safeguarded and misuse addressed, but honest students must not be placed at a disadvantage by systems that policymakers themselves acknowledge are not infallible.

Ultimately, the question is: Are Caribbean education systems ready for what authentic assessment, authentic learning and authentic authorship now demand in a context where the nature of writing itself is shifting?

Dr Zhane Bridgeman-Maxwell is a science educator, researcher, writer and disruptor of outdated education systems in Barbados. Focused on redesigning learning through policy shifts, change management and pedagogical innovation, she amplifies the voices of students, teachers, and parents, while reimagining what school can and should be.
