International Review of Research in Open and Distributed Learning

Volume 25, Number 3

August - 2024

Does AI Simplification of Authentic Blog Texts Improve Reading Comprehension, Inferencing, and Anxiety? A One-Shot Intervention in Turkish EFL Context

Ferdi Çelik, Ceylan Yangın Ersanlı, and Goshnag Arslanbay
Ondokuz Mayıs University, Samsun, Türkiye

Abstract

This experimental study investigates the impact of ChatGPT-simplified authentic texts on university students’ reading comprehension, inferencing, and reading anxiety levels. A within-subjects design was employed, and 105 undergraduate English as a foreign language (EFL) students engaged in both original and ChatGPT-simplified text readings, serving as their own controls. The findings reveal a significant improvement in reading comprehension scores and inferencing scores following the ChatGPT intervention. However, no significant change in reading anxiety levels was observed. Results suggest that ChatGPT simplification positively influences reading comprehension and inferencing, but its impact on reading anxiety remains inconclusive. This research contributes to the literature on the use of artificial intelligence (AI) in education and sheds light on ChatGPT’s potential to influence language learning experiences within higher education contexts. The study highlights the practical application of ChatGPT as a tool for helping students engage in authentic text readings by making text more comprehensible. Based on the findings, several multifaceted implications that extend to various stakeholders in the field of language education are provided.

Keywords: artificial intelligence, ChatGPT, simplification, reading, language teaching

Does AI Simplification of Authentic Blog Texts Improve Reading Comprehension, Inferencing, and Anxiety? A One-Shot Intervention in Turkish EFL Context

The field of higher education is increasingly recognizing technology’s potential to elevate language learning experiences (Hong, 2023; Kalla et al., 2023; Kohnke et al., 2023). Artificial intelligence (AI), a cornerstone of modern life, offers significant opportunities for both teachers and students (Y. Chen et al., 2020; Holmes et al., 2023). AI, including AI in education (AIEd), has transformed pedagogy and human audiovisual literacy, influencing language learning practices (Yang et al., 2021). These advancements underscore a shift in educational paradigms, where AI emerges as a key facilitator in creating immersive and effective language learning environments. The integration of AIEd signifies a transformative era, enriching the educational landscape and shaping new possibilities for language learners and educators alike.

In line with this transformative trend, the dynamic nature of AI is evident in its continuous evolution and the introduction of diverse platforms, such as intelligent tutoring systems, teaching robots, and adaptive learning systems (Y. Chen et al., 2020; Yi et al., 2022). This advancement, exemplified by the merging of generative AI and large language models such as Chat Generative Pre-Trained Transformer (ChatGPT), contributes significantly to shaping language learning experiences (Lund & Wang, 2023; Mhlanga, 2023; Pavlik, 2023; Pogla, 2023). ChatGPT, acknowledged as a sophisticated language generation model, facilitates natural language discussions through machine-learning techniques (Brown et al., 2022; Dida et al., 2023; Susnjak & Maddigan, 2023).

Despite some exploration of ChatGPT’s educational potential, its impact on language learning in higher education remains underexplored. This study addresses this research gap, contributing to the existing scholarly literature by investigating AI’s broader capabilities and potential benefits. To this end, the research questions specifically examine the impact of ChatGPT on university students’ reading comprehension, inferencing performance, and reading anxiety levels when it is used to simplify an authentic text from a life advice Website. In doing so, ChatGPT’s role in fostering effective language practices within higher education is explored.

AIEd: Unraveling ChatGPT’s Impact

Within the ever-changing realm of AIEd, the influence of advanced language models on language learning experiences is a central point of exploration. Ouyang and Jiao (2021) provide a comprehensive overview of AIEd development, emphasizing three paradigms where AI techniques address educational challenges, including learner agency, personalization, reflective learning, and a learner-centered, data-driven approach. Moreover, AI demonstrates proficiency across natural language tasks, from generating essays to translating and answering questions (Rospigliosi, 2023). Amid the rapid growth of natural language processing technology, large language models are recognized as a significant evolution. ChatGPT, a sophisticated generative language model, proves valuable in enhancing critical thinking, academic research, writing, and problem-solving skills (Dwivedi et al., 2023; Sullivan et al., 2023). Its excellence in generating original content and providing students with a comprehensive understanding and analysis of specific subjects (Kasneci et al., 2023; Tlili et al., 2023) underscores its central role in shaping language learning experiences.

Based on initial findings (Bin-Hady et al., 2023; Holmes et al., 2023), ChatGPT emerges as an adaptable simplification tool in language education. It not only contributes to personalized tutoring, automatic grading of writing, deep learning, and adaptive instruction (X. Chen et al., 2020; Chen & Hsu, 2022; Kim et al., 2019; Pang et al., 2021) but also acts as a scaffold for learning. It offers constructive feedback and functions as a collaborative partner in language practice.

This study therefore aimed to explore the intricacies of reading dynamics involving comprehension, inferencing, and anxiety, assessing the unique advantages that ChatGPT offers and its potential contributions to second- and foreign-language learning.

Reading Dynamics

As highlighted by Hu and Nassaji (2014), reading plays a crucial role in vocabulary acquisition for both second-language (L2) and foreign-language (FL) learners, emphasizing the significance of language development. This importance becomes particularly evident during the inferencing process, where readers employ diverse strategies and background knowledge (Hu & Nassaji, 2014). The development of reading skills begins with decoding and word fluency, evolving over time to encompass the ability to make inferences (Bayat & Çetinkaya, 2020). These inferences, in turn, function as facilitators to ensure a thorough grasp of the text. Moreover, Kispal (2008) identifies inferencing ability as one of the core comprehension skills in the context of reading: it empowers readers to establish meaningful connections between explicit information in the text and implicit ideas. In other studies (Haastrup, 2008; Wesche & Paribakht, 2009), the process of inferencing is delineated as guessing the meaning of an unfamiliar word or, alternatively, as “reading between the lines.” At this crucial intersection, ChatGPT can contribute significantly to the process. Operating as an adaptive simplification tool, ChatGPT can be applied judiciously to enhance comprehension. Its effectiveness is evident in its ability to break down complex sentences, use simpler vocabulary, and provide additional contextual information (Pogla, 2023). By supporting readers in making inferences and capturing the main ideas of a text, these approaches aim to alleviate potential reading anxiety, providing a more accessible pathway to comprehension.

Furthermore, to understand the interplay between ChatGPT’s impact on language learning and reading dynamics, the associations revealed by the Corpus of Contemporary American English (COCA, n.d.) were explored. The term reading most frequently collocates with comprehension (2,910), student (1,852), and skill (1,404). Notably, there are no identified collocations involving anxiety, which could be attributed to a gap in the literature concerning reading anxiety. Saito et al. (1999) observed that despite reading’s substantial role in the L2 curriculum, there has been relatively little discussion of anxiety in second-language reading. This gap in the literature highlights the need for a closer examination of reading anxiety, particularly within the context of AI-enhanced language learning.

Factors in FL reading, such as negotiating unfamiliar scripts and encountering unfamiliar cultural material, may pose challenges and evoke anxiety among FL readers (Saito et al., 1999). Detecting reading anxiety is challenging, particularly for silent reading, as it does not necessitate immediate reactions, unlike oral communication (Chow et al., 2021). Navigating the complexities of determining whether authentic texts are comprehensible for students, we encounter diverse terminology associated with authenticity, including genuine, authentic, real, natural, semi-authentic, simulated, and simulated-authentic (AbdulHussein, 2014). In this intricate landscape, Tomlinson (2004) provides insightful perspectives, defining an authentic text as one not created for language teaching purposes. Examples abound, ranging from newspaper articles, rock songs, novels, and radio interviews to traditional fairy stories. Additionally, Tomlinson emphasizes the integral role of authentic tasks, engaging learners in language use reflective of real-world applications beyond the language classroom.

The literature underscores a strong belief in ChatGPT’s integration into language learning for addressing challenges and alleviating anxiety, thereby having a positive impact on learners and marking a transformative step forward in language education. In pursuit of a deeper understanding, the research questions guide the investigation, aligning with the broader context of enhancing language learning experiences through ChatGPT, an advanced AI model. To this end, the following are the research questions of this study:

  1. Does using ChatGPT to simplify an authentic text on a life advice Website affect university students’ reading comprehension compared to reading the original text without ChatGPT simplification?
  2. Does using ChatGPT for text simplification influence university students’ inferencing scores?
  3. How does the use of ChatGPT for text simplification influence university students’ reading anxiety levels?

Method

Research Design

As this study focuses on investigating the impact of ChatGPT simplification of authentic texts, a within-subjects design was appropriate (Keren, 2014). This type of experimental design allowed for the examination of within-participant changes by exposing each participant to both conditions: reading the original text and reading the ChatGPT-simplified text (Lottridge et al., 2011). The exposure to both conditions provided a basis for assessing the effectiveness of the intervention. Each participant served as their own control as their reading comprehension performance and anxiety levels were measured before and after the intervention. Moreover, a one-shot (single-session) intervention (DeBacker et al., 2018, p. 712) was used in this study, as it can be as effective as more extended interventions for academic achievement (Walton & Cohen, 2011) and stress response (Crum et al., 2013).

Study Context

The study took place at a public university in Türkiye. The university had a preparatory school where students were taught English as a foreign language (EFL) for academic purposes for one academic year. At the beginning of their university education, students took a proficiency test, which assessed their English language skills at the B1 (equivalent to pre-intermediate/intermediate) level according to the Common European Framework of Reference for Languages (CEFR). The students who failed the exam took preparatory classes to learn English for academic purposes. These classes aimed to bring students to the B1 level by the end of the year. The students had 26 hours of English weekly: speaking and listening (5 hours/week), reading and writing (5 hours/week), and a 16-hour main course covering the use of English, including vocabulary and grammar. In the skills lessons, the Oxford Q Skills coursebook series was used; in the main course, the Oxford Headway series was used.

Participants

The participants were purposefully sampled from the pool of students enrolled in an undergraduate program at a university in Türkiye. Potential participants were identified based on the criterion of enrollment in an EFL course. They were asked for consent to participate in the study, and 112 students gave informed consent. However, seven participants withdrew from the study due to unforeseen circumstances, resulting in a final sample size of 105 participants (45 male, 60 female), ranging in age from 18 to 24.

Data Collection Tools

Demographics Survey

The survey collected data on participant demographics, such as age and gender. It also asked whether the participants used any tools for understanding challenging texts or if they read authentic texts, considering that these prior experiences might influence their performance in the study.

Reading Comprehension Test

The reading comprehension test (RCT) was developed by the researchers based on the original text used in the study. The test was reviewed by three English-language teachers, who reached consensus that it would effectively measure students’ comprehension. The test consisted of 10 multiple-choice questions with four options each. Each question was worth 10 points, and a perfect score was 100. The participants took the test immediately after both interventions (Table 1). A specific question (inferencing item) was created to investigate the effect of using AI simplification on learners’ inferencing scores.

Foreign Language Reading Anxiety Scale

The foreign language reading anxiety scale (FLRAS), developed by Saito et al. (1999), is a 20-item instrument structured as a 5-point Likert-type scale, ranging from strongly disagree to strongly agree. It was used to measure anxiety levels experienced by participants while engaging with foreign-language reading materials (Mikami, 2019). For the present study, the scale showed satisfactory internal consistency (Cronbach’s alpha) for both the first intervention (.827) and the second intervention (.849). The scale was administered after both interventions (Table 1).
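
For readers who wish to check internal consistency on comparable data, the following is a minimal Python sketch of the Cronbach’s alpha computation. The response matrix and the cronbach_alpha helper are hypothetical illustrations, not the study’s actual analysis script.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of Likert responses."""
    k = items.shape[1]                          # number of items (20 for the FLRAS)
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 105 respondents answering 20 items on a 5-point Likert scale
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(105, 20))
print(round(cronbach_alpha(responses), 3))
```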

Table 1

Data Collection Tool Delivery Times

Instrument             Pre-intervention    First intervention    Second intervention
Demographics survey    X
RCT                                        X                     X
FLRAS                                      X                     X

Note. RCT = reading comprehension test; FLRAS = foreign language reading anxiety scale.

Procedure

To select an authentic text, the researchers focused on the theme “life,” as it was a general topic thought to be interesting for the participants. A ranked list of the 100 best life blogs created by FeedSpot (2024), based on criteria such as Website traffic, social media followers, and timeliness of content, was used. Next, Google’s random number generator tool was used to select a blog, and a life advice blog was selected. The instructor who would deliver the reading class was asked to choose a blog post from the Website, considering students’ interests. Finally, an authentic text was acquired for use in the study. The instructor created a WhatsApp group before the intervention to easily deliver the link for the authentic text.

Control Intervention

The link for the authentic text was shared with the participants. Participants were asked to read the authentic text from the life advice Website carefully. They had 20 minutes to read the text. Following the reading, participants completed the FLRAS and the RCT.

One-Shot Experimental Intervention

The instructor sent the whole text as a WhatsApp message to the participants, who were then asked to open ChatGPT 3.5 and use the following prompt to generate a simplified version of the authentic text: “Make this text comprehensible for an a2 level learner. Put the text here.” The participants then read the simplified version generated by ChatGPT; they had 20 minutes to read it. Next, they were asked to read the authentic text again, using the link to the original Website. After reading the simplified version and revisiting the original text in the blog, participants again completed the FLRAS and the RCT.
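
Although participants produced the simplified text interactively through the ChatGPT Web interface, the same step could in principle be scripted. The sketch below assumes the openai Python package, an API key, and the gpt-3.5-turbo model as a stand-in for the Web version; simplify_for_a2 is a hypothetical helper and is offered only as an illustration, not as part of the study procedure.

```python
from openai import OpenAI  # assumes the openai package is installed and OPENAI_API_KEY is set

client = OpenAI()

def simplify_for_a2(text: str) -> str:
    """Ask a GPT-3.5-class model to rewrite a text for an A2-level reader,
    mirroring the prompt participants typed into the ChatGPT interface."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # stand-in for the ChatGPT 3.5 web interface used in the study
        messages=[
            {
                "role": "user",
                "content": f"Make this text comprehensible for an a2 level learner. {text}",
            }
        ],
    )
    return response.choices[0].message.content

# simplified_text = simplify_for_a2(original_blog_post)  # original_blog_post: the authentic text
```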

Data Analysis

The data collected for this study were analyzed using SPSS v.26 for Windows to address the research questions. Following the within-subjects design, normality tests (Shapiro-Wilk) were employed to assess data distribution. Non-normally distributed data were then analyzed using the Wilcoxon signed-rank test.
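
As a rough illustration of this analysis pipeline, the snippet below runs the same two tests with SciPy on hypothetical paired scores; the simulated pretest and posttest arrays stand in for the study’s actual data, which were analyzed in SPSS.

```python
import numpy as np
from scipy import stats  # assumes scipy is installed

# Hypothetical paired reading comprehension scores (0-100) for 105 participants
rng = np.random.default_rng(1)
pretest = rng.integers(0, 11, size=105) * 10
posttest = np.clip(pretest + rng.integers(-1, 4, size=105) * 10, 0, 100)

# Shapiro-Wilk normality check on each score distribution
w_pre, p_pre = stats.shapiro(pretest)
w_post, p_post = stats.shapiro(posttest)
print(f"pretest: W = {w_pre:.3f}, p = {p_pre:.3f}; posttest: W = {w_post:.3f}, p = {p_post:.3f}")

# Wilcoxon signed-rank test for the paired, non-normally distributed scores
result = stats.wilcoxon(posttest, pretest)
print(f"Wilcoxon statistic = {result.statistic:.1f}, p = {result.pvalue:.4f}")
```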

Findings

Demographics

To provide insight into how learners perceived the difficulty of the text used in this study, all participants were asked to rate the difficulty of the text on a scale ranging from 0 (not difficult) to 10 (extremely difficult). On average, the participants perceived the text to be moderately difficult (M = 6.06); there was some variability in individual perceptions (SD = 3.159).

In response to the question “Do you read English texts from original resources such as news sites, blogs, magazines, etc. (e.g., PubMed, BBC News)?” 40 participants (38.1%) responded “Yes” and 65 participants (61.9%) chose “No.”

Participants were also surveyed regarding the technologies they used to enhance their understanding of original texts. The majority of respondents (63.8%) reported using Google Translate for this purpose. Additionally, 26.7% employed online dictionaries, while 25.7% relied on mobile phone dictionary apps. Notably, 14.3% indicated that they did not employ any additional technologies to improve their comprehension of original texts. Participants were also given the opportunity to specify other technologies they might use; however, no such information was provided. These findings offer valuable insights into the diverse tools participants use to augment their reading comprehension.

Reading Comprehension

To address the first research question regarding the impact of using ChatGPT to simplify an authentic text on university students’ reading comprehension, we performed a series of statistical tests. Initially, Shapiro-Wilk normality tests were conducted, which revealed non-normal distributions for both pretest and posttest results (pretest: W = .953, p = .001; posttest: W = .964, p = .006). Consequently, we turned to the Wilcoxon signed-rank test to analyze the data further (Table 2).

Table 2

Wilcoxon Signed-Rank Test for the Reading Comprehension Test

Posttest-pretest    N      M rank    Sum of ranks    Z         p
Negative ranks      2      21.50     43              -8.142    < .001*
Positive ranks      87     45.54     3,962
Ties                16
Total               105

* p < .05.

A Wilcoxon signed-rank test was conducted to assess the differences between pretest and posttest scores (Table 2). Descriptive statistics revealed a pretest mean of 42.67 (SD = 26.101) and a posttest mean of 60.00 (SD = 24.690). The Wilcoxon test indicated a significant difference between posttest and pretest scores, with a Z value of -8.142 (p < .001, two-tailed). The negative Z value suggests a statistically significant improvement in posttest scores, which implies a positive impact of the intervention.

Inferencing

Concerning the second research question, on the influence of using ChatGPT for text simplification on university students’ inferencing scores, we carried out analyses similar to those described above. First, we applied normality tests to the inferencing item pretest and posttest results. The results indicated non-normally distributed scores for both conditions, with a Shapiro-Wilk statistic of .599 (p < .001) for the pre-condition and .629 (p < .001) for the post-condition. Subsequently, a Wilcoxon signed-rank test was employed to evaluate differences between pre- and posttest scores (Table 3).

Table 3

Wilcoxon Signed-Rank Test for Inferencing Item

Posttest-pretest    N      M rank    Sum of ranks    Z         p
Negative ranks      15     27.50     412.50          -3.266    < .001*
Positive ranks      39     27.50     1,072.50
Ties                51
Total               105

* p < .05.

Descriptive statistics revealed a pre-intervention mean of 3.43 (SD = 4.769) and a post-intervention mean of 5.71 (SD = 4.972) (Table 3). The Wilcoxon test indicated a significant difference between post- and pretest scores (Z = -3.266, p = .001, two-tailed). The negative Z value implies a statistically significant improvement in posttest scores that suggests a positive impact of the intervention on participants’ inferencing abilities.

Reading Anxiety

To address the third research question, related to the impact of ChatGPT-simplified text on university students’ reading anxiety levels, we initially examined the normality of the data with Shapiro-Wilk statistics (pretest: W = .989; posttest: W = .973). The test results suggested that neither the pretest nor the posttest results followed a normal distribution. Therefore, we opted for the Wilcoxon signed-rank test to evaluate the differences (Table 4).

Table 4

Wilcoxon Signed-Rank Test for Reading Anxiety

Posttest-pretest    N      M rank    Sum of ranks    Z         p
Negative ranks      58     53.79     3,120           -1.265    .206*
Positive ranks      46     50.87     2,340
Ties                1
Total               105

* p > .05.

Results of the Wilcoxon signed-rank test did not reveal a statistically significant difference in reading anxiety levels between pretest and posttest conditions (Z = -1.265, p = .206) (Table 4). This suggests no significant change in reading anxiety levels following the intervention.

Discussion

It is evident that AI is having a significant impact on modern life, especially in the field of education. OpenAI’s ChatGPT is a remarkable example of AI technology that can bring about revolutionary advancements in education. The accessibility of such resources encourages the development of AI-powered solutions customized to meet various educational needs, which can improve learning by making it more individualized, efficient, and easily accessible (Hwang et al., 2020). ChatGPT promotes learner autonomy and makes language learning easier by answering questions intelligently and without requiring users to wait for assistance (Taecharungroj, 2023). Furthermore, AI-powered chatbots offer language assistance and encourage frequent conversation practice. As evidenced by research findings, these chatbots have proven effective in promoting language learners’ overall language development as well as serving as companions for learners to engage in conversational practice (Jeon et al., 2023).

However, researchers, educators, and students must understand ChatGPT in a way that sets it apart from classical AI, chatbots, and information systems. First, it is more than just an intelligent system providing learning content, individualized support, or direction. Second, it is more than a chatbot that holds students’ attention through natural language communication. Finally, it goes beyond a writing assistant.

To this end, the current study aimed to investigate the effectiveness of using ChatGPT as a learning assistant/scaffolder by focusing on whether the use of ChatGPT to simplify authentic texts may affect university students’ reading comprehension, inference skills, and anxiety levels when reading authentic texts. A text may undergo simplification in regard to certain grammatical elements, cultural references, and word choice. Essentially, a simplified text has been adjusted from its original version or crafted explicitly for L2 learners. Learners may require simplification to align with the teaching and learning objectives and for better comprehension. The findings gathered from this study indicated a statistically significant difference between students’ authentic text comprehension scores and ChatGPT-simplified text comprehension scores. This finding is contrary to those in the existing literature. Soma et al. (2015) investigated the effect of authentic and simplified texts on reading literacy and vocabulary mastery. Their findings suggest that neither text type was superior to the other in terms of comprehensibility for both high- and low-level achievers. Another study (Gashti, 2018) focused on the effect of authentic and simplified literary texts on the reading comprehension of EFL learners. The results indicated that incorporating both simplified and authentic literary texts had a beneficial impact on the reading comprehension abilities of EFL learners. The investigation also revealed no discernible difference between simplified literary texts and the actual literary materials. Stated differently, language learners’ reading comprehension was significantly improved by both simplified and authentic literary resources. Yet, there appeared to be a slight difference, with simplified texts being favored, which is in line with our study findings. In a similar vein, Crossley et al. (2007) stated that although no meaningful difference was found between authentic and simplified texts in reading comprehension, simplified texts were favored because they provided students with more common words and less syntactic complexity.

This study’s findings also suggest a notable enhancement in posttest scores, indicating a statistically significant improvement in participants’ inferencing abilities as a result of the intervention. Inference making is often acknowledged as a vital component of skilled reading (Cain et al., 2001; Graesser et al., 1994; Laufer, 2020; Oakhill & Cain, 2007). It might be difficult for EFL students to understand authentic texts, especially when they include complicated information. Making inferences from authentic texts is a crucial component of reading comprehension. These inferences are usually made automatically as part of the interpretation process, in which readers use what they already know about the text to figure out the meaning. To sustain understanding, students must constantly generate new information or rely on what they already know to fill in details not explicitly provided by the text. Therefore, in our current study, reading the ChatGPT-simplified texts before reading the authentic texts may have given the students a general idea about the text content, so that they had the necessary background knowledge, which may have helped them infer the meanings of unknown words in the text.

Another finding of the present study is that there was no statistically significant difference in students’ reading anxiety while reading the authentic text and while reading the ChatGPT-simplified text. Anxiety is recognized as a significant factor affecting students’ reading comprehension and can stem from emotional and physical stress. According to the literature, higher levels of reading anxiety may lead to misinterpretations and negative feelings (Dewi & Pramerta, 2021; Jalongo & Hirsh, 2010; Saito et al., 1999). However, it should also be noted that anxiety may have a facilitating effect on students’ reading performance. Meymeh et al.’s (2019) study revealed the facilitating effect of anxiety on university students’ reading performance with texts that were lexically and grammatically simplified. In the current study, however, no statistically significant difference was found between students’ reading anxiety levels while reading the authentic text and while rereading it after reading the ChatGPT-simplified version.

Limitations

This study had several limitations. It was conducted at a single university in Türkiye, which limited its representativeness. The focus on undergraduate students in an EFL course may have restricted generalizability to a more diverse learner population. The exclusion of higher-proficiency students and reliance on a single text may also impact its applicability. The short 20-minute intervention time may not capture long-term impacts. Reading anxiety is a complex psychological construct influenced by various individual and situational factors. The study might not have captured the full complexity of reading anxiety or adequately assessed its long-term effects, as changes in psychological domains typically require sustained interventions over time. The lack of long-term follow-up is also a notable limitation. Future studies addressing these issues would contribute to the literature. Addressing these limitations through larger sample sizes, longer intervention periods, and diverse participant demographics can enhance the applicability of future studies in this area.

Conclusion and Suggestions

The study highlighted the positive impact of using ChatGPT to simplify authentic texts on university students’ reading skills. Specifically, the findings revealed improvement in both reading comprehension and reading inferencing abilities among participants who engaged with ChatGPT-simplified texts. Notably, the observed higher posttest scores shed light on the effectiveness of ChatGPT as an educational tool, which suggests that it has the potential to enhance language learning experiences.

The study did not identify any notable change in students’ overall reading anxiety levels when participants used ChatGPT-simplified texts. This finding indicates that while ChatGPT may contribute positively to L2 reading, it does not have a substantial effect on reducing the anxiety associated with engaging with original, more complex texts in single-shot interventions. One possible reason is that the simplified version, with its simpler words and less complicated grammatical structures, gave students the main idea of the text but only slightly reduced the anxiety arising from unknown elements when they reread the authentic text. Future studies with long-term interventions are needed.

The study’s results present a notable departure from some existing literature that posits no significant difference between authentic and simplified texts. This highlights the need for further exploration into the multifaceted factors influencing language learning outcomes and the impact of AI-driven tools in educational settings. Researchers are encouraged to delve deeper into the relationship between AI-assisted tools like ChatGPT and changes in students’ reading anxiety and also to explore other underlying psychological mechanisms.

The implications of this research extend to the integration of educational technology as the positive impact of ChatGPT suggests its potential incorporation into language learning platforms. English-language teachers may face reading materials that are above learners’ proficiency levels, and AI simplification of these texts would be helpful for learners’ comprehension. Educators may need to adapt their pedagogical approaches using AI technologies while maintaining a balanced instructional strategy. Furthermore, developers of language learning materials could explore the potential to create AI-simplified versions to accompany authentic texts to cater to diverse learners with varying proficiency levels and contribute to a more adaptive learning environment. The positive implications of ChatGPT in language education suggest a broad trend toward the continuous integration of technology in the learning process. Educational institutions should consider investing in and adopting cutting-edge technologies to stay abreast of advancements, promoting innovation in teaching methodologies. Overall, this study contributes valuable insights that inform the ongoing dialogue on the role of AI in language education.

References

AbdulHussein, F. R. (2014). Investigating EFL college teachers’ and learners’ attitudes toward using authentic reading materials in Misan. Procedia—Social and Behavioral Sciences, 136, 330-343. https://doi.org/10.1016/j.sbspro.2014.05.338

Bayat, N., & Çetinkaya, G. (2020). The relationship between inference skills and reading comprehension. Education and Science, 45(203), 177-190. https://doi.org/10.15390/EB.2020.8782

Bin-Hady, W. R. A., Al-Kadi, A., Hazaea, A., & Ali, J. K. M. (2023). Exploring the dimensions of ChatGPT in English language learning: A global perspective. Library Hi Tech. Advance online publication. https://doi.org/10.1108/LHT-05-2023-0200

Brown, H., Lee, K., Mireshghallah, F., Shokri, R., & Tramèr, F. (2022, June 21-24). What does it mean for a language model to preserve privacy? In FaccT ’22: Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (pp. 2280-2292). Association for Computing Machinery. https://doi.org/10.1145/3531146.3534642

Cain, K., Oakhill, J. V., Barnes, M. A., & Bryant, P. E. (2001). Comprehension skill, inference-making ability, and their relation to knowledge. Memory & Cognition, 29(6), 850-859. https://doi.org/10.3758/BF03196414

Chen, H.-R., & Hsu, W.-C. (2022). Do flipped learning and adaptive instruction improve student learning outcome? A case study of a computer programming course in Taiwan. Frontiers in Psychology, 12, Article 768183. https://doi.org/10.3389/fpsyg.2021.768183

Chen, X., Xie, H., Zou, D., & Hwang, G.-J. (2020). Application and theory gaps during the rise of artificial intelligence in education. Computers and Education: Artificial Intelligence, 1, Article 100002. https://doi.org/10.1016/j.caeai.2020.100002

Chen, Y., Chen, Y., & Heffernan, N. (2020). Personalized math tutoring with a conversational agent (arXiv preprint: arXiv:2012.12121).

Chow, B. W.-Y., Mo, J., & Dong, Y. (2021). Roles of reading anxiety and working memory in reading comprehension in English as a second language. Learning and Individual Differences, 92, Article 102092. https://doi.org/10.1016/j.lindif.2021.102092

Corpus of Contemporary American English (COCA). (n.d.). Retrieved January 27, 2024, from https://www.english-corpora.org/coca/

Crossley, S. A., McCarthy, P. M., Louwerse, M. M., & McNamara, D. S. (2007). A linguistic analysis of simplified and authentic texts. The Modern Language Journal, 91, 15-30. https://doi.org/10.1111/j.1540-4781.2007.00507.x

Crum, A. J., Salovey, P., & Achor, S. (2013). Rethinking stress: The role of mindsets in determining the stress response. Journal of Personality and Social Psychology, 104(4), 716-733. https://doi.org/10.1037/a0031201

DeBacker, T. K., Heddy, B. C., Kershen, J. L., Crowson, H. M., Looney, K., & Goldman, J. A. (2018). Effects of a one-shot growth mindset intervention on beliefs about intelligence and achievement goals. Educational Psychology, 38(6), 711-733. https://doi.org/10.1080/01443410.2018.1426833

Dewi, P. H. R., & Pramerta, G. P. A. (2021). Correlation between anxiety and reading comprehension: A study in a secondary school. Academic Journal on English Studies, 1(2), 152-161. https://e-journal.unmas.ac.id/index.php/ajoes/article/view/4606

Dida, H. A., Chakravarthy, D. S. K., & Rabbi, F. (2023). ChatGPT and big data: Enhancing text-to-speech conversion. Mesopotamian Journal of Big Data, 2023, 33-37. https://doi.org/10.58496/MJBD/2023/005

Dwivedi, Y. K., Kshetri, N., Hughes, L., Slade, E. L., Jeyaraj, A., Kar, A. K., Baabdullah, A. M., Koohang, A., Raghavan, V., Ahuja, M., Albanna, H., Albashrawi, M. A., Al-Busaidi, A. S., Balakrishnan, J., Barlette, Y., Basu, S., Bose, I., Brooks, L., Buhalis, D., Carter, L., ... Wright, R. (2023). “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. International Journal of Information Management, 71, Article 102642. https://doi.org/10.1016/j.ijinfomgt.2023.102642

FeedSpot. (2024, May 20). 100 best life blogs and websites in 2024. Lifestyle Bloggers Database. https://lifestyle.feedspot.com/life_blogs/

Gashti, Y. B. (2018). The effect of authentic and simplified literary texts on the reading comprehension of Iranian advanced EFL learners. Iranian Journal of English for Academic Purposes, 7(2), 32-44.

Graesser, A. C., Singer, M., & Trabasso, T. (1994). Constructing inferences during narrative text comprehension. Psychological Review, 101(3), 371-395. https://doi.org/10.1037/0033-295X.101.3.371

Haastrup, K. (2008). Lexical inferencing procedures in two languages. In D. Albrechtsen, K. Haastrup, & B. Henriksen (Eds.), Vocabulary and writing in a first and second language: Processes and development (pp. 67-111). Palgrave Macmillan. https://doi.org/10.1057/9780230593404_3

Holmes, W., Bialik, M., & Fadel, C. (2023). Artificial intelligence in education. In C. Stückelberger & P. Duggal (Eds.), Data ethics: Building trust: How digital technologies can serve humanity (pp. 621-653). Globethics Publications. https://doi.org/10.58863/20.500.12424/4276068

Hong, W. C. H. (2023). The impact of ChatGPT on foreign language teaching and learning: Opportunities in education and research. Journal of Educational Technology and Innovation, 5(1), 37-45. https://jeti.thewsu.org/index.php/cieti/article/view/103

Hu, H. C. M., & Nassaji, H. (2014). Lexical inferencing strategies: The case of successful versus less successful inferencers. System, 45, 27-38. https://doi.org/10.1016/j.system.2014.04.004

Hwang, G. J., Xie, H., Wah, B. W., & Gasevic, D. (2020). Vision, challenges, roles and research issues of artificial intelligence in education. Computers and Education: Artificial Intelligence, 1, Article 100001. https://doi.org/10.1016/j.caeai.2020.100001

Jalongo, M. R., & Hirsh, R. A. (2010). Understanding reading anxiety: New insights from neuroscience. Early Childhood Education Journal, 37(6), 431-435. https://doi.org/10.1007/s10643-010-0381-5

Jeon, J., Lee, S., & Choe, H. (2023). Beyond ChatGPT: A conceptual framework and systematic review of speech-recognition chatbots for language learning. Computers & Education, 206, Article 104898. https://doi.org/10.1016/j.compedu.2023.104898

Kasneci, E., Seßler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., Gasser, U., Groh, G., Günnemann, S., Hüllermeier, E., Krusche, S., Kutyniok, G., Michaeli, T., Nerdel, C., Pfeffer, J., Poquet, O., Sailer, M., Schmidt, A., Seidel, T., Stadler, M., ... Kasneci, G. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences, 103, Article 102274. https://doi.org/10.1016/j.lindif.2023.102274

Keren, G. (2014). Between- or within-subjects design: A methodological dilemma. In G. Keren & C. Lewis (Eds.), A handbook for data analysis in the behavioral sciences: Vol. 1. Methodological issues (pp. 257-272). Psychology Press. https://doi.org/10.4324/9781315799582-10

Kim, S., Park, J., & Lee, H. (2019). Automated essay scoring using a deep learning model. Journal of Educational Technology Development and Exchange, 2(1), 1-17.

Kispal, A. (2008). Effective teaching of inference skills for reading: Literature review (Research report DCSFRR031). National Foundation for Educational Research. https://eric.ed.gov/?id=ED501868

Kohnke, L., Moorhouse, B. L., & Zou, D. (2023). ChatGPT for language teaching and learning. RELC Journal, 54(2), 537-550. https://doi.org/10.1177/00336882231162868

Laufer, B. (2020). Lexical coverages, inference unknown words and reading comprehension: How are they related? TESOL Quarterly, 54(4), 1076-1085. https://doi.org/10.1002/tesq.3004

Lottridge, S. M., Nicewander, W. A., & Mitzel, H. C. (2011). A comparison of paper and online tests using a within-subjects design and propensity score matching study. Multivariate Behavioral Research, 46(3), 544-566. https://doi.org/10.1080/00273171.2011.569408

Lund, B. D., & Wang, T. (2023). Chatting about ChatGPT: How may AI and GPT impact academia and libraries? Library Hi Tech News, 40(3), 26-29. https://doi.org/10.1108/LHTN-01-2023-0009

Meymeh, M. H., Rashtchi, M., & Mohseni, A. (2019). Anxiety effect: A case of text modification and the effect of high and low anxiety levels on medical students’ comprehension performance. Journal of Paramedical Sciences, 10(1), 20-26. https://doi.org/10.22037/jps.v10i1.24111

Mhlanga, D. (2023, February 11). Open AI in education, the responsible and ethical use of ChatGPT towards lifelong learning. SSRN. https://doi.org/10.2139/ssrn.4354422

Mikami, H. (2019). Reading anxiety scales: Do they measure the same construct? Reading in a Foreign Language, 31(2), 249-268. https://eric.ed.gov/?id=EJ1232213

Oakhill, J., & Cain, K. (2007). Issues of causality in children’s reading comprehension. In D. S. McNamara (Ed.), Reading comprehension strategies: Theories, interventions, and technologies (pp. 47-72). Lawrence Erlbaum Associates Publishers.

Ouyang, F., & Jiao, P. (2021). Artificial intelligence in education: The three paradigms. Computers and Education: Artificial Intelligence, 2, Article 100020. https://doi.org/10.1016/j.caeai.2021.100020

Pang, G., Shen, C., Cao, L., & Hengel, A. V. D. (2021). Deep learning for anomaly detection: A review. ACM Computing Surveys, 54(2), 1-38. https://doi.org/10.1145/3439950

Pavlik, J. V. (2023). Collaborating with ChatGPT: Considering the implications of generative artificial intelligence for journalism and media education. Journalism & Mass Communication Educator, 78(1), 84-93. https://doi.org/10.1177/10776958221149577

Pogla, M. (2023, April 24). ChatGPT: Optimizing language models for dialogue. AutoGPT. https://autogpt.net/chatgpt-optimizing-language-models-for-dialogue/

Rospigliosi, P. A. (2023). Artificial intelligence in teaching and learning: What questions should we ask of ChatGPT? Interactive Learning Environments, 31(1), 1-3. https://doi.org/10.1080/10494820.2023.2180191

Saito, Y., Garza, T. J., & Horwitz, E. K. (1999). Foreign language reading anxiety. Modern Language Journal, 83(2), 202-218. https://doi.org/10.1111/0026-7902.00016

Soma, R., Mukminin, A., & Noprival, A. M. (2015). Toward a better preparation of student teachers’ reading skill: The SQ3R strategy with authentic and simplified texts on reading literacy and vocabulary mastery. Journal of Education and Learning, 9(2), 125-134. https://doi.org/10.11591/edulearn.v9i2.1527

Sullivan, M., Kelly, A., & McLaughlan, P. (2023). ChatGPT in higher education: Considerations for academic integrity and student learning. Journal of Applied Learning and Teaching, 6(1), 31-40. https://doi.org/10.37074/jalt.2023.6.1.17

Susnjak, T., & Maddigan, P. (2023). Forecasting patient flows with pandemic induced concept drift using explainable machine learning. EPJ Data Science, 12, Article 11. https://doi.org/10.1140/epjds/s13688-023-00387-5

Taecharungroj, V. (2023). What can ChatGPT do? Analyzing early reactions to the innovative AI chatbot on Twitter. Big Data and Cognitive Computing, 7(1), Article 35. https://doi.org/10.3390/bdcc7010035

Tlili, A., Shehata, B., Adarkwah, M. A., Bozkurt, A., Hickey, D. T., Huang, R., & Agyemang, B. (2023). What if the devil is my guardian angel: ChatGPT as a case study of using chatbots in education. Smart Learning Environments, 10, Article 15. https://doi.org/10.1186/s40561-023-00237-x

Tomlinson, B. (2004). Materials development in language teaching (2nd ed.). Cambridge University Press. https://doi.org/10.1017/9781139042789

Walton, G. M., & Cohen, G. L. (2011). A brief social-belonging intervention improves academic and health outcomes of minority students. Science, 331(6023), 1447-1451. https://doi.org/10.1126/science.1198364

Wesche, M. B., & Paribakht, T. S. (2009). Lexical inferencing in a first and second language: Cross-linguistic dimensions. Multilingual Matters. https://doi.org/10.21832/9781847692245

Yang, S. J., Ogata, H., Matsui, T., & Chen, N. S. (2021). Human-centered artificial intelligence in education: Seeing the invisible through the visible. Computers and Education: Artificial Intelligence, 2, Article 100008. https://doi.org/10.1016/j.caeai.2021.100008

Yi, Y., Cho, S., & Jang, J. (2022). Methodological innovations in examining digital literacies in applied linguistics research. TESOL Quarterly, 56(3), 1052-1062. https://doi.org/10.1002/tesq.3140


Does AI Simplification of Authentic Blog Texts Improve Reading Comprehension, Inferencing, and Anxiety? A One-Shot Intervention in Turkish EFL Context by Ferdi Çelik, Ceylan Yangın Ersanlı, and Goshnag Arslanbay is licensed under a Creative Commons Attribution 4.0 International License.