
My journey South (Part 1) . . . Tracing developments on Artificial Intelligence in Latin America and the Caribbean


by Chelceé Brathwaite

While some still consider Artificial Intelligence (AI) to be beyond the grasp of developing countries, our South American neighbours have been shattering that stereotype. AI is being deployed in a number of their endeavours: to speed the discovery of archaeological artifacts in Peru; to increase crop yields in Colombian rice fields through AI-powered platforms; to boost security and enhance customer service in Brazil’s banking sector; to create vegan alternatives with the same taste and texture as animal-based foods in Chile’s food industry; to predict school dropouts and teenage pregnancy in Argentina; and to forecast crime in Uruguay.

Some of the push in AI adoption in these countries has come from academics and researchers, like those at the University of São Paulo who are developing AI to determine patients’ susceptibility to disease outbreaks; or at Peru’s National Engineering University, where robots are being used in mine exploration to detect gases; or at Argentina’s National Scientific and Technical Research Council, where AI software is predicting the early onset of pluripotent stem cell differentiation.

These and other truths were revealed to me at a Latin America and Caribbean (LAC) Workshop on AI organized by Facebook and the Inter-American Development Bank in Montevideo, Uruguay, in November this year. I was the lone Caribbean participant in attendance, presenting my paper entitled: AI & the Caribbean: A Discussion on Potential Applications & Ethical Considerations, on behalf of the Shridath Ramphal Centre (UWI, Cave Hill).

Defining AI

While AI has no universally accepted definition, it describes machines and systems that can acquire and apply knowledge, and execute intelligent behaviour. Beyond robots and autonomous hardware devices, AI’s application also extends to software-based operations in the virtual world like Siri. At the heart of AI is technology that exhibits cognitive capability.

The term AI was first introduced in 1956, in the context of work done by computer scientists like John McCarthy, Alan Turing and Marvin Minsky. AI’s rise to pre-eminence today can be attributed to a number of factors, including vastly expanded access to computing power and the growth of Big Data. In the last six decades, the world has witnessed a trillion-fold increase in computing power, and worldwide data is expected to grow from 33 zettabytes in 2018 to 175 zettabytes by 2025.

Towards AI’s Ethical & Legal Considerations

Despite the hype, adoption of AI provokes a number of ethical and legal questions, such as: where should responsibility lie for deaths caused by an autonomous vehicle that deliberately decides to crash? Who should own the copyright on content created by AI, especially in legal regimes where such protection extends only to human-created works? And how can an AI-powered hiring system be prevented from exacerbating gender and racial inequalities in specific job roles?

The LAC AI workshop attempted to examine these and other ethical issues by providing a forum not for engineers and software developers, but for academics, philosophers and lawyers, who debated topics such as the following:

Data Governance and Privacy

AI runs on vast quantities of data. But in a climate of eroded public confidence in data-collecting organizations and data-consuming technologies, it is questionable whether current data governance frameworks are flexible enough, yet robust enough, to maintain privacy protection. Moreover, too little is understood about the impact of data monetization models on privacy.

One potential solution mooted is the data trust – a legal structure providing independent stewardship of data, in which third-party users are responsible for using and sharing the data in a fair, safe and equitable way. Notwithstanding the model’s challenges, particularly around compliance, data trusts would go some way towards allaying concerns about how sensitive data is held and used by AI technologies.

Bias and Discrimination

Amazon’s AI recruiting tool showed bias against women; AI facial recognition systems worked better for white men than for black women; and an online chatbot turned racist. In all three cases, a European study found a common contributor to be the training data used. This finding raises the questions of whether improving the quantity and quality of our data could avoid biased outcomes, and whether algorithms could be prevented from creating profiles that discriminate against certain social groups.

If AI technologies follow the “garbage in, garbage out” rule, then tackling biased and discriminatory outcomes must begin at the data input level. Here, research points towards several remedies: controlled distortion of training data; integration of anti-discrimination criteria into classification algorithms; post-processing of the classification model once extracted; and correction of decisions to maintain proportionality between protected and unprotected groups. Recognizing that preconceived ideas derive in part from an algorithm’s design means that algorithms must be “audited” for their susceptibility to discrimination, and that there must be full transparency, even if transparency itself raises issues of intellectual property and national security.
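To make the auditing and post-processing remedies concrete, here is a minimal, hypothetical Python sketch – not drawn from the workshop or the studies cited above – of how a scoring model’s decisions might be audited for disparate selection rates between two groups, then corrected with per-group thresholds. All names, scores and thresholds are invented for illustration.

```python
# Hypothetical sketch: audit a binary decision rule for disparate
# selection rates across groups, then post-process group-specific
# thresholds so the rates are proportionate (demographic parity).

def selection_rate(scores, threshold):
    """Fraction of candidates whose model score clears the threshold."""
    return sum(s >= threshold for s in scores) / len(scores)

def audit(groups, threshold=0.5):
    """Report each group's selection rate under one shared threshold."""
    return {name: selection_rate(scores, threshold)
            for name, scores in groups.items()}

def equalize(groups, target_rate):
    """Post-process: choose a per-group threshold whose selection
    rate is as close as possible to the target rate."""
    thresholds = {}
    for name, scores in groups.items():
        candidates = sorted(set(scores), reverse=True)
        thresholds[name] = min(
            candidates,
            key=lambda t: abs(selection_rate(scores, t) - target_rate),
        )
    return thresholds

# Invented model scores for two applicant groups.
groups = {
    "group_a": [0.92, 0.85, 0.77, 0.66, 0.41, 0.33],
    "group_b": [0.71, 0.58, 0.47, 0.39, 0.28, 0.22],
}

print(audit(groups))                       # unequal rates at a 0.5 cut-off
print(equalize(groups, target_rate=0.5))   # per-group thresholds restoring parity
```

Under these invented scores, a single shared cut-off of 0.5 selects two-thirds of one group but only one-third of the other; the per-group thresholds the sketch computes bring both selection rates to one-half. Real-world approaches are more sophisticated, but the principle – measure the disparity, then correct the decision rule – is the same.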

Rogue AI and Unintended Consequences

The Avengers: Age of Ultron shows a case of AI application gone wrong. While not as dramatic, real incidents of self-driving cars running red lights and autopilot systems malfunctioning portend disastrous consequences. Even more alarming are reports of autonomous weapons systems being developed to aim and fire at “human enemies”.

The prospect of errant application forces the issue of AI design. Some AI R&D guidelines propose incorporating controllability, transparency, safety and ethics from inception. But the proliferation of non-mandatory guidelines, compared with other normative frameworks, casts doubt on enforceability and compliance. In turn, the question arises of who is responsible for unintended consequences: should blame be laid at the feet of the developer or user, the machine, or neither? Transposing existing regulatory schemes onto advanced technological developments also proves challenging. Although product liability traditionally focuses on negligent design or manufacture, or breach of duty, AI’s autonomous and evolving nature makes it difficult to identify the point of defect or predict dangerous outcomes – and thus to assign liability.

Part 2 will be published in Thursday’s edition.
