Assessing the effectiveness of ChatGPT-3.5 and ChatGPT-4o in simplifying Italian institutional texts
DOI:
https://doi.org/10.62408/ai-ling.v2i2.18

Keywords:
Large Language Models, ChatGPT, bureaucratic and professional texts, linguistic simplification, human evaluation

Abstract
This research describes the performance of ChatGPT-3.5 and ChatGPT-4o on the task of Automatic Text Simplification (ATS) of Italian institutional texts. The aim is to analyse the linguistic differences between the original texts and their simplified rewritings produced by ChatGPT, and the impact of these differences on non-expert users’ experience. A dataset of six short texts was compiled and rewritten using a zero-shot instructional prompt. The methodological approach combined quantitative linguistic analysis, manual analysis and human judgment to assess the effectiveness of the simplification. For the quantitative linguistic analysis, an additional comparison was made between ChatGPT’s rewritings and human revisions, used as an external benchmark to better contextualize the AI’s simplification strategies. The study provides new insights into the linguistic structure of administrative-bureaucratic texts by examining readability parameters and collecting subjective assessments of comprehension and perceived comprehensibility. It also contributes to the growing body of research on text simplification methods and the role of large language models (LLMs) in enhancing accessibility to complex institutional discourse.
License
Copyright (c) 2025 Mariachiara Pascucci, Claudia Gigliotti

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.