The ethical complexities of using artificial intelligence in academic research

Dr Sheetal Bhoola.

Published Apr 15, 2025

THERE has been increasing attention given to the use of Artificial Intelligence (AI) at universities and its role in the knowledge economy. Many academics are concerned about the originality of AI-generated content and its alignment with reality and truth. AI is predominantly based on data and algorithms, which can pose problems of accuracy, timeliness and contextual relevance.

This can become problematic, particularly when research is based in localities that are not well known and where there has been limited research development and engagement. Some AI platforms respond to a query with broad brush strokes, while others are more academically oriented and can draw appropriately on citations and relevant literature.

The challenge now facing academics and researchers at universities is how to effectively identify and query work presented by students who have used an AI platform. The most commonly used tool at present is Turnitin, which quickly detects the percentage of similarity between published work and a student's work. Its key function is the identification of plagiarism, but what of the ethics involved in using AI platforms for academic writing and research outputs?

Some internationally indexed journals now clearly stipulate that authors must disclose whether AI platforms were used in the write-up of a research article.

The pressing question is how we use this resource effectively and ethically. Ethics plays an important role in the research process of a young aspiring academic.

The methods employed by AI to retrieve information rest on drawing general conclusions or theories from the specific observations or examples that individuals tend to search for. A simple test with Meta AI on WhatsApp proved this to me: a search for a public figure returned broad conclusions rather than the truth. This is a major dilemma in the challenge of sustaining ethical and accurate research output.

An algorithm can analyse data, identify trends and patterns, and even make decisions for you if probed correctly. The central idea is that it should be used to enhance learning, reasoning and problem-solving skills.

Beyond AI's capacity to enhance human deliverables, the ethical concerns remain and will most probably intensify as we become further enmeshed in and dependent on artificial intelligence. We need to understand clearly what the ethical challenges are so that we can formulate a workable countering mechanism. Central to it all is that the information generated by algorithms tends to be biased and discriminatory.

Artificial Intelligence systems can bring to the fore existing societal prejudices and biases, which are often derived from the topics that citizens search for.

This information is then further amplified and can be perceived as a dominant viewpoint in society, though this may not be the case. Here we must question the capacity of AI systems to be accurate and fair. The other major concern is the violation of privacy. The principles of POPIA (the Protection of Personal Information Act) also need to be aligned with the regulation of Artificial Intelligence.

This still needs to be addressed, as at present we have no systematic approach to dealing with artificial intelligence and the violation of privacy. Other major concerns relate to the lack of accountability and transparency when artificial intelligence delivers information. We must ask where exactly its generated statistics come from, and how accurate and timely the information presented is.

Job displacement in all its senses is a reality: AI can eliminate tasks, as individuals can easily delegate their work to AI platforms.

At this stage, can we really deem this full, true and comprehensive research? How inclusive are the algorithms used, and where exactly is the information drawn from? The answers ultimately shape reports and other valuable information needed by researchers who aim to promote development in all spheres of society.

The challenge going forward is how we limit and account for the misuse of these platforms, and who has the authority to declare that a particular individual has abused access to Artificial Intelligence. My view is that standardised screening should be conducted whenever a report is presented that is based primarily on data collected from an AI platform.

These challenges will soon confront us at full speed, and POPIA needs to include regulations that continue to protect human integrity and identity.

The perception that Artificial Intelligence is accurate and correct is worrying; students and others need to realise that the way it retrieves information means it can be biased, discriminatory and in violation of human dignity. The key is to implement a strategic approach to how and when we use artificial intelligence platforms.

Dr Sheetal Bhoola is an academic at The University of Zululand and the Director of StellarMaths in Durban.
