Digital deceit and legal delay: The price of inaction in the age of AI

by Steven Williams

On May 26, 2025, Prime Minister Mia Amor Mottley urged the creation of a regional AI policy, stressing the need for Caribbean nations to confront AI's threats to democracy. Speaking at a summit and later at a BLP by-election celebration, her remarks echoed global concerns: AI offers great potential, but its misuse can be dangerously destabilising.

While strategic in nature, her appeal, I believe, was also personal: she has been repeatedly impersonated using AI, her image misused in online scams posing as legitimate business ventures.

While much of the public discourse around AI centres on job displacement, nowhere is its threat more visible or immediate than in the rise of deepfakes and synthetic media—realistic audio, video, or images generated by AI to deceive, impersonate, or manipulate. Once confined to research labs, these tools are now in the hands of anyone with a smartphone and internet access. Yet legal systems across much of the Caribbean, including Barbados, are struggling to keep pace with the scale, accessibility, and sophistication of these rapidly evolving threats.

The deepfake dilemma

Deepfakes have quickly emerged as one of the most dangerous intersections of artificial intelligence and misinformation. These AI-generated audio, video, and images are so convincing that they can easily be mistaken for authentic recordings. At their core, they exploit our trust in what we see and hear, making them powerful tools for deception, especially in the democratic process.

In the political realm, deepfakes can be used to fabricate speeches or statements that never occurred, spreading false narratives and undermining democratic processes. Their use in campaigns, whether to discredit opponents or manipulate public opinion, introduces a new level of volatility and distrust into electoral systems. During the 2025 Trinidad election, I was called in to verify the authenticity of an alleged audio recording attributed to a political leader. In that instance, my analysis supported the authenticity of the recording, yet there were public suggestions that it was AI-generated, illustrating a growing trend where genuine content is dismissed as fake to avoid accountability. This real-world example, reported in the Trinidad Express, highlights not only the threat of synthetic media but also how the perception of AI is now being used as a smokescreen, reinforcing the urgent need for clear, enforceable legislation in the Caribbean.

Equally disturbing is the proliferation of non-consensual synthetic pornography. Both public figures and private individuals have found their likenesses inserted into explicit content, widely shared across digital platforms without consent. I recall that during a previous administration, a prominent political figure became the subject of social media speculation when a video surfaced appearing to show him engaged in a pornographic act. While any discerning eye could tell it wasn’t him, that incident, now more than a decade old, occurred long before today’s AI capabilities. The technology has since advanced by leaps and bounds, making such fabrications far more convincing and far more dangerous. These violations are deeply personal and damaging, especially in jurisdictions where legal protections remain unclear or outdated.

Where the law stands: From misuse to obsolescence

Deepfakes aren’t just a reputational risk—they’re being used for identity fraud, tricking banks, employers, and even families through hyper-realistic impersonations. Even more alarming is the rise of AI-generated child sexual abuse material (CSAM), which, though synthetic, mirrors real harm and challenges existing laws. With these tools now widely accessible, the potential for abuse and lasting damage is escalating rapidly.

Barbados’ current cybercrime law is stuck in the past, like a 20-year-old security system trying to stop hackers armed with AI. The Computer Misuse Act (2005) was drafted long before deepfakes, AI scams, or synthetic fraud became real threats. The 2023 Cybercrime Bill is a critical upgrade, proposing tougher penalties for cyberbullying and, notably, banning AI-generated child abuse material. But like a stalled software update, political gridlock has left it stuck in Parliament while criminals are potentially exploiting the gaps.

The holdup? A single contentious clause triggered such backlash that the entire bill risks being scrapped. But by discarding the package over one dispute, we’ve left the country defenceless against AI-driven threats. A pragmatic fix? Pass the non-controversial provisions now—especially those addressing urgent risks like synthetic CSAM—and return to the debate later.

Even then, the bill still falls short. It fails to define deepfakes, grants no rights over digital likenesses, and overlooks AI-powered political disinformation. Progress? Yes. But with AI evolving daily, Barbados can’t afford half-measures.

Bridging the legal gaps

To effectively govern the misuse of AI, Barbados must establish the right to protect one’s digital identity—including image, voice, and likeness—as a fundamental legal principle. Unauthorised use, whether for deception, defamation, or exploitation, should not fall into grey areas; it must be explicitly criminalised.

Critically, the law must penalise synthetic media misuse based on intent, not just impact. Where the purpose is to deceive, impersonate, or manipulate—especially in political contexts—there must be consequences. Deepfakes used to fake endorsements, fabricate speeches, or distort public behaviour must be outlawed without ambiguity.

Yet, in protecting the public, we must also protect freedom. Legislation must make room for legitimate satire, journalism, education, and artistic expression, provided it respects privacy and public safety. Some digital rights advocates argue that AI-generated content should fall under free expression, particularly in art and political commentary. That position is not without merit, but in the face of synthetic deception, the right to expression must be balanced with the public’s right not to be misled. Barbados’ legal framework must therefore defend free speech without enabling abuse—a framework where innovation and accountability can coexist.

The goal is not to restrict creativity but to ensure that truth, consent, and digital dignity are preserved in an age where reality itself is increasingly under threat. Barbados has a clear path forward; what’s needed now is the political will to take it.

Steven Williams is the executive director of Sunisle Technology Solutions and the principal consultant at Data Privacy and Management Advisory Services. He is a former IT advisor to the Government's Law Review Commission, focusing on the draft Cybercrime Bill. He holds an MBA from the University of Durham and is certified as a chief information security officer by the EC-Council and as a data protection officer by the Professional Evaluation and Certification Board (PECB). Steven can be reached at Mobile: 246-233-0090; Email: steven@dataprivacy.bb
